
March 01, 2009

Comments

I just wanted to mention that the property you call "anti-inductiveness" is referred to as "reflexivity" in some social science circles. The general gist is that, when a system is social in nature, any new model that describes the system will, if it comes to be widely believed, change the way the system works. Since the model was based on the past behavior of the system, the model usually fails shortly after becoming widely known.

P.S. nice blog.

Reading essays and thinking makes us better rationalists, but in order to overcome certain biases, it would be really helpful to have exercises. Well-designed exercises can help us practice thinking properly, and measure our biases so that we can know whether we've eliminated them. However, this requires prepared materials created with that goal in mind. Accordingly, I have designed (but not yet gathered materials for) some exercises to help practice countering biases.

Confidence Calibration Test
Take a quiz where you both answer questions and estimate your confidence as low, medium, or high. Grade the quiz, and look at the percentage of questions you got right at each confidence level. In the future, use these percentages to anchor any estimates of the probability that you're right. For best results, the question bank should include a well-balanced collection of topics, including at least one topic you know nothing about and at least one topic you know very well.
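A minimal sketch, in Python, of how such a quiz might be scored; the data format and names here are illustrative assumptions, not prescribed by the exercise:

```python
from collections import defaultdict

# Score a calibration quiz: `results` pairs each question's stated
# confidence level with whether it was answered correctly.
def calibration_table(results):
    totals = defaultdict(int)
    rights = defaultdict(int)
    for confidence, correct in results:
        totals[confidence] += 1
        rights[confidence] += int(correct)
    # Percentage right at each confidence level; the exercise suggests
    # using these as anchors for future probability estimates.
    return {c: 100.0 * rights[c] / totals[c] for c in totals}

print(calibration_table([("high", True), ("high", True),
                         ("medium", True), ("medium", False),
                         ("low", False)]))
# {'high': 100.0, 'medium': 50.0, 'low': 0.0}
```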

Confirmation Bias Test
Take two opposing editorials, of similar but not particularly high quality. First, read the one you agree with, and cross off all invalid arguments. Then read the one you disagree with, and do the same. Compare your answers against an answer key. Give yourself one point for each argument you marked correctly. In the article that you agreed with, lose five points for each bad argument you missed, but don't lose points for good arguments that you mislabeled as bad. In the article you disagreed with, lose five points for each valid argument you rejected, but don't lose points for bad arguments you accepted.

Requires two editorial articles, which take opposite sides on the same issue. Requires someone to create an answer key that identifies each argument and labels it as valid or invalid.
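Here is a sketch of that scoring rule in Python; the representation (an answer key mapping argument ids to validity, plus the set of arguments the reader crossed off) is a hypothetical choice of mine:

```python
# key: {argument_id: True if valid, False if invalid}, from the answer key.
# marked: set of argument ids the reader crossed off as invalid.
# agreed_with: whether the reader agreed with this editorial.
def score_article(key, marked, agreed_with):
    score = 0
    for arg, valid in key.items():
        crossed = arg in marked
        if crossed != valid:
            score += 1   # marked correctly: crossed off iff invalid
        elif agreed_with and not valid:
            score -= 5   # missed a bad argument in the article you liked
        elif not agreed_with and valid:
            score -= 5   # rejected a good argument in the article you disliked
    return score
```

Run it once for each editorial (with `agreed_with` set appropriately) and sum the two results.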

Anchoring Bias Test
This test asks you to estimate various quantities. There are already plenty of tests like this floating around, but we'll add a twist. Before each question is written a number, which may or may not be close to the quantity being estimated. Read this number before thinking about the question. When you're finished, divide the questions into three categories: questions answered correctly, questions where your answer fell between the anchor and the true value, and questions where it did not. If your answers usually fall between the anchor and the true value, you suffer from anchoring bias. If they usually do not, you are overcompensating for anchoring bias. Repeat the test a few times with different lists of questions, until about as many of your answers fall between the true value and the anchor as not.

Requires a bank of questions with numerical answers that must be estimated, an answer key, and someone to format the test and mix in anchor values.
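A sketch of the tallying step in Python; the 5% tolerance for counting an answer as "correct" is an assumption of mine, since the exercise doesn't specify one:

```python
# Each trial is (anchor, true_value, answer).
def anchoring_tally(trials, tolerance=0.05):
    correct = between = outside = 0
    for anchor, true_value, answer in trials:
        if abs(answer - true_value) <= tolerance * abs(true_value):
            correct += 1
        elif min(anchor, true_value) < answer < max(anchor, true_value):
            between += 1   # pulled toward the anchor: anchoring bias
        else:
            outside += 1   # pushed past the true value: overcompensation
    return correct, between, outside
```

Per the exercise, repeat with fresh questions until `between` and `outside` come out roughly equal.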


If anyone knows of suitable materials or has other ideas for exercises, contributions would be greatly appreciated. It'd also be nice to have a thread on Overcoming Bias and/or Less Wrong for this purpose.

Overcoming biases is easier for people to do if they have a viable psychological place to go once those biases have been overcome. EY posted on this in his "building a line of retreat" and "moral void" series, but I think it deserves a lot more attention.

In fact my experience tells me that almost all hard-to-remove bias in the real world is there because people hang their axiological beliefs upon false factual claims, i.e. they don't have a line of retreat. I think that about 50% of the work in overcoming biases should be in this area, i.e. should address the psychological effects of overcoming your biases and how to cope with them, with the other 50% being on how to actually identify and correct for your biases.

@jimrandomh:

I second this idea. I would very much like to take a "bias" examination so that I can see how good I am compared to the experts here.

One could probably use Fermi problems for the anchoring tests. It should be possible to find quite large databases of such problems from various physics competitions. They have the advantage of training systematic estimation of quantities as well.

Perhaps coincidentally, I requested test ideas this morning here. Thanks! :)

> look at the percentage of questions you got right at each confidence level. In the future, use these percentages to anchor any estimates of the probability that you're right.

A better way to determine your calibration is the statistical concept of expected score. Here is my explanation of that. Make sure you scroll past my explanation a little to read the two comments by Peter de Blanc, whom I learned this from.
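For readers who don't follow the links: a rough sketch in Python of one common form of that idea, under my own assumptions (binary predictions, log scoring), not necessarily de Blanc's exact formulation. If the score you actually achieve is close to the score you expected to achieve given your stated probabilities, you are well calibrated:

```python
import math

# predictions: list of (probability_assigned, came_true) pairs.
def actual_score(predictions):
    return sum(math.log(p if outcome else 1 - p)
               for p, outcome in predictions)

# The score a forecaster should expect if their probabilities are honest
# and accurate: each event contributes its probability-weighted log score.
def expected_score(predictions):
    return sum(p * math.log(p) + (1 - p) * math.log(1 - p)
               for p, _ in predictions)

preds = [(0.9, True), (0.7, True), (0.6, False), (0.8, True)]
print(actual_score(preds), expected_score(preds))
```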

"A better way" --> "Another way"

The article in Wired on quants and the collapse -- it sure sounds like prediction markets in practice to me. Risk assessed through asset prices leading to a gigantic cluster fuck? I think a post connecting prediction markets to the economic collapse would be very current and relevant.

I'd like to see posts on academia, in particular the social sciences.

A website that lets you test your calibration was described in an Open Thread comment from July 1, 2008. It is:

http://calibratedprobabilityassessment.org/.

Another one mentioned about this same time is:

http://www.acceleratingfuture.com/tom/?p=129.

Stephen Hawking made a bet with another cosmologist, for a magazine subscription, against a position that Hawking was advocating in his papers. That way, he helped counterbalance bias towards his own position with the enjoyment of winning the bet.

anyone interested in a Baltimore/DC area meetup?

https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art7.html

I'd like to know if anyone has come across this book, and what they thought of it.

I just read something upsetting. Specifically, that Douglas Lenat never publicly released the source code of EURISKO. Is there a EURISKO variant that actually is available?

Consider the following problem:

You want to make an AI that plays a decent game of Whatever. However, you know very little about Whatever strategy. In fact, you're a complete beginner who knows little more than the rules themselves, so you think that your best shot is to create an AI that "learns" to tell the difference between good play and bad play on its own.

Where do you start with this?

> anyone interested in a Baltimore/DC area meetup?

I'm going to make a post - I'm holding a DC area OB meetup on Friday the 13th, in Springfield, at 7 PM. Prove your rationality and come out! Reply to my post. (Not yet written; later today.)

> I just read something upsetting. Specifically, that Douglas Lenat never publicly released the source code of EURISKO. Is there a EURISKO variant that actually is available?

Not AFAIK. I've looked for it too. I emailed Doug Lenat, but he never replied.

Friendly AI arguments Condensed

Registration at Less Wrong doesn't work for me and some other people, and because you need to register to comment, I can't do so at the Less Wrong post about problems with the site. So I'm doing it here.

I'm bothered by Many-Worlds because it reintroduces all the absurdities of classical mechanics. Namely: all states that exist have zero probability, and it takes an infinite amount of information to describe the universe.

We need some quantization of worlds. Either a quantization of space, or a quantization of probability. QM plus relativity says that a particle can be described with only a finite amount of information; but it doesn't define a set of possible locations for a particle with which you can tile probability space.

Perhaps the Nyquist sampling theorem can be used to define a finite or countable set of possible worlds that jointly encompass all possible worlds.

Would anyone be interested in starting the Near Future of Humanity Institute? Our goal will be the creation of a seed-seed-AI that will figure out what sort of seed-AI we would want if we had the time and resources to figure it out properly.

In seriousness: the conception of a blank slate in the essay "Beyond anthropomorphism" seemed... well, remarkably un-blank. It assumes that an AI would be able to self-identify (delimit itself from the world around it). I think this is an anthropomorphism. Why would the AI see any physical component of itself as being "I"? Couldn't it simply view any and all actions as having greater and lesser degrees of impact upon its goals? In such a scenario, "I" would include any sub-component whose failure would lead to the supergoal not being completed.

I'm not sure how clear this is.

Philip Goetz,
You praise QM, but condemn many worlds. Do you mean that collapse makes a difference to finiteness? This sounds wrong to me. There are various viewpoints on whether QM on a finite dimensional Hilbert space is "finitely much information," but I don't think that collapse changes the answer to that question.

The book mentioned in Robin's post 'Near-Far Like Drunk Darts' suggests that perhaps some on 'Overcoming Bias' trust reasoning/logic too much, and qualia/intuition too little?

Seems to me the people on this blog favor causal modelling/Bayes because that's what the brains of high-IQ types are good at processing. I don't mean to be cynical, but perhaps one hidden motivation for extolling Bayes on Internet message boards is to feel superior to the 'common man' and to further one's social status via an avenue one feels one can show off in?

Several years ago I was convinced of the 'bankruptcy' of arid rationalism (which, I would point out, is now literally true, since blind reliance on simplistic Bayesian models may have been a factor in the financial collapse, according to a 'New Scientist' editorial). I refocused instead on the things humans do well (analogical reasoning, conscious experience).

Let me just make one interesting point: although there seems to be an infinite variety of physical modalities (smells, tastes, sounds, etc.), the emotional modalities (or what I call 'Valuation Qualia') seem to have a quite different character; unlike, say, vision, things like pain/pleasure appear to be distinct, scalar, and binary, and analogues of the basic emotional modalities exist across the animal kingdom, independently of the (arbitrary) physical modalities. Reason to doubt Yudkowsky's claim of 'no unique architecture' for super-intelligences?

To what extent does our value system direct our intellect, in terms of making aesthetic judgements about, for example, what constitutes an interesting proof? To what extent does the smooth functioning of rationality actually depend on these underlying value judgements? Perhaps, for rationality, there should be more focus on traits other than intelligence per se, which, I suggest, is only one limited aspect of rationality.

Rationality, whilst it does have some value in itself, is merely a means to an end; it is not an ultimate end per se. The real goal is winning (i.e. any sensible mind must have the goal system as its core; rationality is only a secondary tool in the service of the goal system). This should always be remembered.


Doug, one way I know of is to let it play against itself with a neural net. This was done successfully with backgammon (it reached expert level). But of course this wouldn't work for every type of game.
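For concreteness, a minimal sketch of that approach in Python, in the spirit of TD-Gammon's self-play temporal-difference learning; the Game interface (reset, to_move, legal_moves, apply, reward, features) is a hypothetical placeholder, not a real library:

```python
import random

ALPHA = 0.1    # learning rate
EPSILON = 0.1  # exploration rate

# Linear evaluation: estimated probability that player 0 wins.
def value(weights, feats):
    return sum(w * x for w, x in zip(weights, feats))

# Play one game against itself, updating weights by TD(0).
def self_play_episode(game, weights):
    state = game.reset()
    feats = game.features(state)
    while game.reward(state) is None:  # reward is 1.0/0.0 at the end, else None
        successors = [game.apply(state, m) for m in game.legal_moves(state)]
        if random.random() < EPSILON:
            state = random.choice(successors)  # explore occasionally
        else:
            # Player 0 picks the successor it rates best, player 1 the worst.
            best = max if game.to_move(state) == 0 else min
            state = best(successors,
                         key=lambda s: value(weights, game.features(s)))
        # TD(0): nudge the estimate for the previous position toward the
        # estimate for (or the actual outcome of) the position reached.
        r = game.reward(state)
        target = r if r is not None else value(weights, game.features(state))
        delta = target - value(weights, feats)
        for i, x in enumerate(feats):
            weights[i] += ALPHA * delta * x
        feats = game.features(state)
```

Tesauro's backgammon program worked roughly this way, with a neural network in place of the linear evaluator.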

I would like to know how democracy is supposed to work in an era of artificial intelligence. Assuming that artificial people would be recognized as having basic human rights, and thus the right to vote, is it possible to have a democratic system when one party is able to copy itself (presumably including opinions and policy preferences) in great numbers at will? I don't think creating only friendly AI solves this problem, although friendly AIs would make for beneficial overlords, I suppose.

Not sure if many of you have seen Singularity Hub or not. We post almost daily on scientific and technical developments that I think many of you may find of interest.

"Seems to me the people on this blog favor causal modelling/Bayes because that's what the brains of high-IQ types are good at processing. I don't mean to be cynical, but perhaps one hidden motivation for extolling Bayes on Internet messageboards is to feel superior to the 'common man' and further social status via an avenue one feels one can show off in?"

If we were going to favor minority positions, why should we disfavor Bayesian modeling? Bayesian methods are still very unpopular, and historically Bayesians have been a tiny minority of thinkers - consider that Bayesian methods were applied to spam filtering only 7 or 8 years ago, even though Bayes's formula has been around for how many centuries, and spam has been a problem for how many decades? Yes, perhaps commentators here have bought into the Bayesian conspiracy, but if it were so widespread, so easy to understand, then why does Eliezer have to constantly harp on it and write things like his Technical Explanation?

(Personal anecdote: I find Bayesian ideas very hard to understand, and even harder to actually apply, even though like most OB posters I am at least 'bright' and certainly a 'high-IQ type'.)
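(An aside for the curious: the spam application alluded to above is naive Bayes filtering, popularized around 2002. A minimal sketch with Laplace smoothing; the whitespace tokenization and uniform prior are simplifications of mine:)

```python
import math
from collections import Counter

# Count word occurrences in spam and ham training corpora.
def train(spam_docs, ham_docs):
    spam = Counter(w for d in spam_docs for w in d.lower().split())
    ham = Counter(w for d in ham_docs for w in d.lower().split())
    return spam, ham, set(spam) | set(ham)

# Posterior probability that a message is spam, via Bayes' rule,
# combining per-word evidence "naively" (words treated as independent).
def p_spam(message, spam, ham, vocab, prior=0.5):
    log_odds = math.log(prior / (1 - prior))
    spam_total, ham_total = sum(spam.values()), sum(ham.values())
    for w in message.lower().split():
        ps = (spam[w] + 1) / (spam_total + len(vocab))  # Laplace smoothing
        ph = (ham[w] + 1) / (ham_total + len(vocab))
        log_odds += math.log(ps / ph)
    return 1 / (1 + math.exp(-log_odds))

spam, ham, vocab = train(["buy cheap pills now"], ["meeting notes attached"])
print(p_spam("cheap pills", spam, ham, vocab))  # comfortably above 0.5
```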

"Several years ago was convinced of the 'bankruptcy' of arid rationalism (which I would point out, is now literally true , since blind reliance on simplistic Bayesian models may have been a factor in the financial collapse according to a 'New scientist' editorial). I refocused instead on the things humans do well (analogical reasoning, conscious experience)."

So in your search for an unjustly marginalized way of thinking, you... refocused on methods humans have been trying (and largely failing with) for thousands of years? I see.

>So in your search for an unjustly marginalized way of thinking, you... refocused on methods humans have been trying (and largely failing with) for thousands of years? I see.

Is analogy formation a failed method? Category theory suggests that in fact the whole of mathematics can be entirely recast in terms of analogy formation. What, after all, is math other than equations which indicate analogies (mappings) between one thing (the left side of the equation) and another (the right side)?

From Wikipedia:
Analogy

“Category theory takes the idea of mathematical analogy much further with the concept of functors. Given two categories C and D a functor F from C to D can be thought of as an analogy between C and D, because F has to map objects of C to objects of D and arrows of C to arrows of D in such a way that the compositional structure of the two categories is preserved.”
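Spelled out, the preservation requirement is the pair of standard functor laws (my own rendering in LaTeX; not part of the Wikipedia excerpt):

```latex
F(\mathrm{id}_X) = \mathrm{id}_{F(X)}
\qquad\text{and}\qquad
F(g \circ f) = F(g) \circ F(f)
\quad\text{for all composable arrows } f, g \text{ in } C.
```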

Before anyone knew about Bayes' theorem, exactly how did Thomas Bayes come up with that formula? If all math is based on analogy formation, then it appears that so too must be Bayes' rule.

But arguments aside, is there empirical evidence that all math is based on analogy formation? Yes. Apparently the human mind works by manipulating a small alphabet of a priori abstractions (ontological concepts) built into the human mind (as Kant long ago realized). It turns out that many (if not all) sophisticated math concepts are actually derived from these basic ontological categories we learned to manipulate (via analogy formation) in childhood. See this paper by AI researcher Aaron Sloman:

'The Well Designed Young Mathematician'

Make no mistake, Yudkowsky's ‘reign of Bayesian terror’ is coming to an end. Soon analogy formation will knock Bayes from its perch. As the fighter pilot expression goes, 'I've got tone, I've got tone!' ;)

I seem to recall that Douglas Hofstadter has been promoting analogy-formation as the root of cognition for quite some time now, and I'd be surprised if most folks on this blog hadn't read one or two of his books (though some are better than others).

At any rate, there's a distinction to be made between "what is the basic activity of the human mind" and "what is the most basic way to model rational processes". Working closely with the brain's basic subconscious processes may in fact be suboptimal if they're too heavily tied to where cognitive biases arise in the first place!

Beware, Geddes is a known crank.

The current Book of the Week on BBC Radio 4 might be of interest to readers of Overcoming Bias: The Decisive Moment by Jonah Lehrer.

"Since Plato, philosophers have described the decision-making process as either rational or emotional: we carefully deliberate or we go with our gut instinct.
But as scientists break open the mind's black box with the latest tools of neuroscience, they are discovering that this is not an accurate picture of how the mind works. Our best decisions are a finely-tuned blend of both feeling and reason, and the precise mix depends on the situation."

Each program is available for 7 days following broadcast here: http://www.bbc.co.uk/radio4/arts/book_week.shtml

>At any rate, there's a distinction to be made between "what is the basic activity of the human mind" and "what is the most basic way to model rational processes".

Yes, this is about foundations.

Category theory encompasses the whole of mathematics (it's just as powerful as set theory). I've just pointed out that, with a suitable interpretation, category theory recasts the whole of mathematics in terms of analogies (see the Wikipedia reference in my previous post).

The Bayes rule is a mathematical equation. All math equations can be recast in category theory, and all of category theory can be recast as analogies (see above). Therefore, Bayesian induction is merely a special case of analogy formation.
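For reference, the equation in question, relating belief in a hypothesis H before and after seeing evidence E:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```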

The sooner fan-boys of Bayes admit their own abysmal levels of human 'rationality' and the fact that they failed to spot the basic schoolboy-level argument proving that analogy formation is the real foundation of logic, the better.

@mjgeddes

"The sooner fan-boys of Bayes admit their own absymal levels of human 'rationality'"

Forgive me for intruding in this conversation, mjg, but it strikes me that your real beef now has perhaps moved to Less Wrong? Just a thought. Carry on.

Seen on Reddit, relevant to Eliezer's posts about tiling the universe with smiley faces: Robot Programmed To Love Goes Too Far

A Japanese lab tries to program a robot to love. The robot ends up trapping an intern in a hug and refusing to let her go until help arrives and manually deactivates it. Bonus: the researcher in charge calls it "the final step... in one of the fundamentals of the Singularity."

>Forgive me for intruding in this conversation, mjg, but it strikes me that your real beef now has perhaps moved to Less Wrong? Just a thought. Carry on.

Yudkowsky's domain of virtual godhood, complete with fake 'Karma' and all? No thanks.

Take it from me, IQ is way overrated. Look at the fate of Chris Langan, the smartest man in America (IQ tested at 195+!), close to the limits of human intelligence (he could have *smoked* even Bostrom, Hanson, and Yudkowsky in any field, with one hand tied behind his back), yet he wasted his life generating theories that make no sense to anyone but himself:

Chris Langan

And most so-called 'rationality' is really nothing but a bunch of biases dressed in bad arguments, not to mention the fact that it's really boring anyway. Having fun and good conscious experience is far more important.

Can someone point me to a post here on OB where the relationship between Newcomb and Prisoner's Dilemma is made clear? It's mysterious to me why it's often implied that defecting in a true PD and one-boxing in Newcomb are somehow inconsistent...

This has probably been addressed already, but I wanted to write it down anyway. When I first learned about Overcoming Bias, it came as a bit of a shock. Since then I've tried to apply some of the ideas developed here in actual debates and conversations, updating priors, etc. The good news is that I think I've changed my mind on many topics. The not-so-good news is that when I analyze the way I've changed my mind, I see that in many cases I have simply rectified deviations from some more primitive priors that were lurking in the background.

Big news! Stephen Wolfram has apparently completed implementation of an AGI engine on top of Mathematica! I'm pleased to report that it seems to operate on the same ideas I've been advocating (ontological building blocks, math, semantic web). LOL.

Wolfram's Alpha

Excerpt from 'Wolfram's Alpha'...

"One of the most surprising aspects of this project is that Wolfram has been able to keep it secret for so long. I say this because it is a monumental effort (and achievement) and almost absurdly ambitious. The project involves more than a hundred people working in stealth to create a vast system of reusable, computable knowledge, from terabytes of raw data, statistics, algorithms, data feeds, and expertise. But he appears to have done it, and kept it quiet for a long time while it was being developed."

mjgeddes,
That Wikipedia entry isn't particularly convincing to me that Langan is the smartest man in America.

Anon,

Langan is very likely close to the smartest person in America in terms of raw IQ; this has been confirmed several times by independent IQ testers. His adult IQ is at least 195, and could be as high as 210. This IQ is close to the human genetic limit.

The point is that IQ doesn't stop a person from holding all sorts of irrational beliefs, unfortunately. Look at Yudkowsky, who as far as I can tell is actually serious in his Libertarian political beliefs and the idea that 'Bayes is the secret to the universe'. Another example would be David Chalmers and his 'property dualism' theory of consciousness. You know, I actually fell for Chalmers for a while, but I've got an excuse; I'm not a super genius. For someone as smart as Chalmers to go on peddling that tripe, there's really no excuse.

The recent Wolfram example shows that the real advances are not coming from high-IQ 'blow-hards' on Internet message boards. The real advances are being made by creative, original thinkers.

mjgeddes, I disagree on several points.
1. The Langan evidence seems weak to me. "Several times", "independent IQ testers", "close to the human genetic limit" - that's not strong evidence. Those seem to me to be buzzwords in a good story with obvious counterhierarchical appeal. Frankly, the story has been a bit of a cliche in its various iterations for a few generations now.
2. I share your skepticism about Yudkowsky and Chalmers, but I think you present it in a kind of caricaturish way (particularly Yudkowsky, I know less about Chalmers).
3. I share your admiration of Wolfram's work. And I think there are real advances coming from high-IQ 'blow-hards' blogging on the internet. Like this guy:
http://blog.wolfram.com/

A fun critique of Wolfram's book by someone who appears to be an expert on CA:

http://www.cscs.umich.edu/~crshalizi/reviews/wolfram/

This anecdote illustrates my intuition that Hanson is wrong in his (fun for him because it's contrarian?) claim that relatively undiversified entrepreneurship is rational for the individual:

http://andrewsullivan.theatlantic.com/the_daily_dish/2009/03/the-view-fro-38.html#more

I intuitively favor what I think are the more conventional views: that a level of diversification exceeding that of most forms of entrepreneurship is a more rational approach to wealth-building for individuals, even though a high level of (individually irrational) entrepreneurship is a more rational approach to wealth-building for a society.

