
May 18, 2008

Comments

I can't help but remember you talking about a teacher who always had an error in his lectures, and over the course of the semester made them harder and harder to find. The last lecture, which was the most complex, didn't have a flaw.

I was more or less surrounded by people of average sanity when I grew up, but they still seemed pretty nuts to me. (Completely off-topic, but I really wonder why people tell children known fantasies such as Santa Claus and the Easter Bunny.)

I don't think it's really accurate to say most people are insane. Clearly they need to be sane for the world to keep on running. IMO, they are insane when they can afford to be - which is pretty common in politics, religion and untestable hypotheses, but a LOT less common in the workplace. Most people just aren't interested in truth because truth doesn't pay out in a lot of circumstances. I wonder if science might change in your direction (and how quickly?) if betting markets were more commonly accepted?

You're thinking of Kai Chang's My Favorite Liar. It's linked in the Post Scriptum.

When they taught me about the scientific method in high school, the last step was "go back to the beginning and repeat." There was also a lot about theories replacing other theories and then being replaced later, new technologies leading to new measurements, and new ideas leading to big debates.

I don't remember if they explicitly said, "You can do science right and still get the wrong answer," but it was very strongly (and logically) implied.

I don't know what you were taught, but I expect it was something similar.

All this "emotional understanding" stuff sounds like your personal problem. I don't mean that it isn't important or that I don't have sympathy for any pain you suffered. I just think it's an emotion issue, not a science issue.

That is why I am a rationalist and a libertarian. Everyone is totally and completely responsible for every choice they make and everything they do. Everyone, from toddler to parent - no one can protect you from your responsibility. That is the difference between a child and a real adult: the adult knows and accepts their responsibility, while the less-than-mature try to deny or hide from it.

@Brian: Emotion is the driver of everything, even rationality.

Why do you lack social curiosity? Do you think it's a neuro-quirk, or just a normal quirk?

Eliezer,

this article is GOLD!

People who've had their trust broken in the sanity of the people around them, seem to be able to evaluate strange ideas on their merits.

I'd say instead that this prod produces a high variance response. Some rise to the challenge and become more rational, while others fall even deeper into delusion. Yes the most rational people tend to have experienced this, but so also for the most irrational people.

Science is capitalised, suggesting an abnormal definition of the term. Can this definition be found somewhere? What is "Science" - if it is different from science - and if it is not different, then why capitalise it?

I started to seriously think about rationality only when I started to think about AI, trying to understand grounding. When I saw that meaning, communication, correctness and understanding are just particular ways to characterize probabilistic relations between "representation" and "represented", it all started to come together, and later was transferred to human reasoning and beyond. So, it was the enigma of AI that acted as a catalyst in my case, not a particular delusion (or misplaced trust). Most of the things I read on the subject were outright confused or in the state of paralyzed curiosity, not deluded in a particular technical way. But so is "Science". The problem is in settling for status quo, walking along the trodden track where it's possible to do better.

Thus, I see this post as a demonstration by example of how important it is to break the trust in all of your cherished curiosity stoppers.

I tried, but didn't find a flaw. Anyone else?

The idea that flaws need to be added - and that the final lecture will be flawless - is both silly and presumptuous. There will almost certainly be flaws, whether they are added or not, and our judgment is not adequate to determine whether our own work has them or not.

Eliezer, all of your problems with "Science" seem to stem not from any problems with the method itself, but from your personal tendency to treat the method as a revelation that people have an emotional investment in: in other words, a religion.

There are a variety of ways people can fail to put science into practice. One of the most pernicious is failing to apply it in situations where it is clearly called for, because we have an emotional investment in holding positions that we don't want to disturb. One especially dangerous subtype of this error is when the important subject is our 'scientific' reasoning and the conclusions we derived from it. It is even more dangerous than the general case because it doesn't just involve a corruption of our ability to deal with one specific set of problems, but a corruption of the general method we must use to rationally investigate the world. Instead of merely having a blind spot, we lose our sight completely, while at the same time losing our ability to detect that we're blind.

You are guilty of this error. That doesn't mean that you've gained a unique insight that must be shared with the world at all costs. This is a very old and trivial insight that most people worth listening to have already produced independently.

I agree with your general view, but I came to the same view by a more conventional route: I got a PhD in philosophy of science. If you study philosophy of science, you soon find that nobody really knows what science is. The "Science" you describe is essentially Popper's view of science, which has been extensively criticized and revised by later philosophers. For example, how can you falsify a theory? You need a fact (an "observation") that conflicts with the theory. But what is a fact, if not a true mini-theory? And how can you know that it is true, if theories can be falsified, but not proven? I studied philosophy because I was looking for a rational foundation for understanding the world; something like what Descartes promised with "cogito ergo sum". I soon learned that there is no such foundation. Making a rational model of the world is not like building a house, where the first step is to lay a solid foundation. It is more like trying to patch a hole in a sinking ship, where you don't have the luxury of starting from scratch. I view science as an evolutionary process. Changes must be made in small increments: "Natura non facit saltus" (nature does not make jumps).

One flaw I see in your post is that the rule "You cannot trust any rule" applies recursively to itself. (Anything you can do, I can do meta.) I would say "Doubt everything, but one at a time, not all at once."

@Caledonian: If it is an old and trivial insight, why do most scientists and near all non-scientists ignore it?

As Eli said in his post, there is a difference between saying the words and _knowing_, on a gut level, what they mean - only then have you truly incorporated the knowledge, and only then will it aid you in your quest to understand the world.

Also, you say:
Caledonian:
but from your personal tendency to treat the method as a revelation that people have an emotional investment in

Of course people have an emotional investment in this stuff!! Do not make the old mistake of confusing rationality with not being emotional (I guess Star Trek, with Mr. Spock, is guilty of that, at least for our generation).

And what could be more emotional than dumping the legends of your tribe/parents/priests/elders?

For rationality and emotion in science, read for instance here:
The Passionate Scientist: Emotion in Scientific Cognition
Paul Thagard
http://cogsci.uwaterloo.ca/Articles/Pages/passionate.html

You will have to study [...] and social psychology [...]

Please could you recommend some social psychology material?

As you explain so clearly here, the point is to think for ourselves instead of trusting in any person or system. This valuable insight can be reached by many idiosyncratic paths through life. Your personal path to it, trusting too much in Science itself, is an ironically interesting one, unlikely to be trod by most. That's why your line "Science Isn't Strict Enough" fails to resonate with some readers.

Jared, why should you trust yourself more than someone else? And if there is someone more worthy of trust than you, wouldn't it be a more rational strategy to let him think for you instead of thinking for yourself?

If my own judgment is so faulty that I choose to let somebody else do all my thinking for me, then how can I even trust the thinking behind my choice?

If you think that Science rewards coming up with stupid theories and disproving them just as much as more productive results, I can hardly even understand what you mean by Science beyond the "observe, hypothesize, test, repeat" overview given to small children as an introduction to the scientific method. Was Eliezer-18 blind to anything beyond such simple rote formulas?

Negative results are forgiven but hardly ever rewarded (unless the theory disproven is widely believed).

If you'd put aside the rather bizarre bitterness and just say: "Bayesian rationality is a good way to pick which theories to test. Here's some non-toy examples worked through to demonstrate how" that would be much more useful than these weird parables and goofy "I am an outcast" rants.

"@Caledonian: If it is an old and trivial insight, why do most scientists and near all non-scientists ignore it?"

They don't. The mismatch between you and them is that they're busy thinking about something else at the moment. I like the rule Turney gave above: "Doubt everything, but one at a time, not all at once." Of course, a single person can't follow that rule completely (there's not enough time in a lifespan to doubt EVERYTHING), and most people pick the wrong things to doubt or are lazy in applying the rule.

Of course, that rule's going to get in the way of reaching truth in some cases (some falsehoods come in self-reinforcing pairs both of which must be doubted in order to falsify either, and some things can't profitably be denied even for the sake of argument), but that's the case with any process, and this is something we've known since Goedel.

This kind of confuses me about this series... If all he was telling us was that Science is a powerful set of rules, and that therefore it can't eliminate all contradictions nor state all facts, I'd simply agree with him. But he seems to be saying that Bayesianism is different from Science, that somehow applying it instead of Science will have better results. It seems to me that both are processes, and both have blind spots.

I find it difficult to be sympathetic towards someone who complains he wasn't warned that the rule "do not take things on faith" wasn't supposed to be taken on faith.

We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?

We could talk until we're blue in the face, but the simple truth is that you cannot force people to apply a method consistently, rigorously, or intelligently. No amount of adding onto the lesson will make people apply it properly, it merely offers them more things to misunderstand, ignore, and apply inconsistently.

We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?

That's the problem with discrete reasoning. When you have probabilities, this problem disappears. See http://www.ditext.com/carroll/tortoise.html

@billswift: Emotion might drive every human action (or not). That's beside the point. If an emotion drives you into a dead end, there's something wrong with that emotion.

My point was that if someone tells you the truth and you don't believe them, it's not fair to say they've led you astray. Eliezer said he didn't "emotionally believe" a truth he was told, even though he knew it was true. I'm not sure what that means, but it sounds like a problem with Eliezer, involving his emotions, not a problem with what he was told.

Jared, it is possible to see that someone is more intelligent and trustworthy than you, without therefore being yourself more intelligent and trustworthy than him.

Eliezer didn't trust science too much. He didn't trust it enough. Instead of taking the duties and requirements of skepticism seriously, he treated the scientific method as another faith system.

I'm sure that was a very comforting and familiar approach to take, but it was still wrong. Completely, fundamentally wrong. It's utterly incompatible with the skepticism, open-mindedness, and radical doubt that is essential to the scientific method. And it seems to have had long-lasting implications for the sorts of positions Eliezer takes.

One suggestion for the flaw:

Conclusions from this article:
a) you are never safe
b) you must understand a) on an emotional basis
c) the only way to achieve b) is through an experience of failure after following the rules you trusted

The flaw is that the article actually does the opposite of what it wants to accomplish: by giving the warning (a), it makes people feel safer. In order to convey the necessary emotion of "not feeling safe" (b), Eliezer had to add the PS regarding the flaw.

In a certain sense this also negates c). I think Eliezer doesn't really want us to fail (c) in order to recognize a); the whole point of overcomingbias.com is to prevent humans from failing. So if Eliezer did a good job of conveying the necessary insecurity through his PS, then hopefully c) won't happen to you.

Roland

Roland, agreed.

Does anyone disagree that science does not have nearly as strict quantitative constraints as Bayescraft on what you may believe?
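To make "strict quantitative constraints" concrete, here is a minimal sketch (illustrative numbers of my own, not taken from the post): once you commit to a prior and to likelihoods, Bayes' theorem leaves you exactly one admissible posterior, whereas the social process of Science only demands that the idea eventually be tested.

```python
# Minimal sketch, illustrative numbers only: given a prior P(H) and the
# likelihoods P(E|H) and P(E|~H), Bayes' theorem pins down the posterior.
# There is no remaining freedom about what you may believe after seeing E.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# A hypothesis at a 30% prior, with evidence three times as likely under H
# as under ~H, must now be believed at exactly 0.5625 -- no more, no less.
print(posterior(0.30, 0.60, 0.20))
```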

@Vladimir Nesov:

why do you say that the problem disappears when you have probabilities?

I guess you still have the same basic problem, which is: what are your priors? You cannot bootstrap from nothing, and I think that is what the tortoise was hinting at: that there are hidden assumptions in our reasoning that we are not aware of, and that you can't think without using those hidden assumptions.

Roland,

Probabilities allow grades of belief, and just as Achilles' pursuit of the tortoise can be considered as consisting of an infinite number of steps, if you note that the steps actually get infinitely short, you can sum them up to a finite quantity. Likewise, you can join infinitely many infinitely unlikely events into a compound event of finite probability. It is a way to avoid the regress Caledonian was talking about. Evidence can shift probabilities on all metalevels, even if in some hapless formalism there are infinitely many of them, and still lead to reasonable finite conclusions (decisions).

Likewise, you can join infinitely many infinitely unlikely events into a compound event of finite probability. It is a way to avoid the regress Caledonian was talking about.

No, Mr. Nesov, it is not. You and I are talking at cross purposes.

Caledonian, you are not helping by disagreeing without clarification. You don't need to be certain about anything, including your estimate of how much you are uncertain about something, and your estimate of how much you are uncertain about that estimate, and so on.
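To make the convergence point concrete, here is a minimal sketch (my own illustration, with made-up numbers) of how infinitely many meta-levels of doubt, each smaller than the last, add up to a bounded overall uncertainty rather than an endless regress:

```python
# Minimal sketch, made-up numbers: suppose meta-level k of doubt -- the
# warning about the warning about the warning, k levels deep -- independently
# contributes a chance eps * r**k of overturning everything below it.
# The chance that *some* level fails is 1 - prod(1 - eps * r**k), which stays
# bounded because the summed doubt eps + eps*r + eps*r**2 + ... is a geometric
# series with finite sum eps / (1 - r).

def total_doubt(eps=0.01, r=0.5, levels=1000):
    """Probability that at least one of `levels` meta-levels fails."""
    p_all_fine = 1.0
    for k in range(levels):
        p_all_fine *= 1.0 - eps * r**k
    return 1.0 - p_all_fine

for n in (5, 10, 1000):
    print(n, round(total_doubt(levels=n), 6))
# Settles just under 0.02 almost immediately: adding ever more meta-levels
# contributes only a finite, rapidly shrinking amount of extra uncertainty.
```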

"The experimental method shall decide which hypothesis wins"?

When there are experiments that can reasonably be done, or have already been done, then this works, right?

"Do you think it's a neuro-quirk, or just a normal quirk?"

Wait, there's another kind?

"Doubt everything, but one at a time, not all at once."

Interestingly, Robin Hanson has an existing post on this subject:

To have the best chance of succeeding in a radical project, you should instead choose just a few related dimensions on which to make radical choices, and then make conservative conventional choices on all the other dimensions. This strategy minimizes the chance that some other project dimension will go badly wrong and take down your central radical idea with it.

About flaws in the post: the idea that environmentalists shouldn't oppose a scaling up of nuclear power is a flaw. This paper, http://www.stormsmith.nl/, was lucky enough to be written (and I was lucky enough to find it, since every nuclear utility on the planet has ignored the basic economic analysis it contains). Basically, proposals for scaling up nuclear fail to account for the full life-cycle cost of decommissioning a power plant. Additionally, going all-nuclear ensures the nuclear lobby (the process of lobbying is something Libertarians don't understand) becomes a nearly permanent chunk of the world's economy.
My point is that Bayesian reasoning here only unearths the facts and lies that the nuclear industry puts forward. You need reliable information sources in addition to Bayes to make correct judgement calls on economics/energy here.

Cryonics is another flaw. I don't know if it works or not, but the expensive process certainly shouldn't be a part of universal healthcare coverage at present. The best research about brains I've read to date (not 1970s deductive reasoning) suggests thought is a substrate-specific process (I don't know how specific; semiconductors, no way, but maybe more inclusive than CNS proteins) that functions as temperature-dependent solitons. Whatever temperature the brain drops to in the cold water that ice-slip hypothermia survivors endure, it does not follow that brain processes will survive liquid-nitrogen or liquid-helium temperatures. Under the mathematical model of reality most transhumanists hold, temperature (which requires physics) doesn't even exist! I hope cryonics works, and if I were rich enough I might sign up or fund suspension research, but to suggest those who don't believe in cryonics are fools is to suggest brains work identically at 25C and -273C. Then the "rationalist" rebuttal is always to invoke "uploading for immortality" (which is where Transhumanism dies for me, despite all its progressive memes). If rationalists can't understand that mathematics isn't physics, I don't want to be labelled a rationalist, and I will pepper posts such as this one to keep unsuspecting readers from mindlessly believing a mindless belief system; I am trying to prevent H+ from functioning as a cult.

Intrade free money?! Surely you must know it takes $5000-$15000/yr in basic costs alone to live in most of the Western world. Surely you must know the average savings rate is very low. Please qualify statements like this with "the rich/middle classes can make investments in liquid markets" (can Insite really handle trillions of $$ as suggested, and wouldn't it then be subject to manipulation? I'll bet the 65 cents in my pocket that Phillip will splash coffee on himself). Bayes may be important, but if it misses very basic facts (like the fact that 9/10ths of the world can't presently afford to live off investment income), why would the world want to incorporate more Bayes, a small subset of probability theory already in math, logic and computer science curriculums (I think)?
Sweet. I did splash coffee on myself and doubled up to $1.30. Now I can donate to the political party most likely to teach probability theory (for the purpose here of reasoning skills, not ethics) instead of religious dogma in public schools. That is the conclusion of the post: better public education at the expense of pop culture. Or more funding for public-education commercials?
