
December 21, 2008


But how much has your intuitive revulsion at your dependence on others, your inability to do everything by yourself, biased your beliefs about what options you are likely to have? If wishes were horses, you know. It is not clear what problems you can really blame on each of us not knowing everything we all know; to answer that, you'd have to be clearer about which counterfactuals you are considering.

"But with a sufficient surplus of power, you could start doing things the eudaimonic way. Start rethinking the life experience as a road to internalizing new strengths, instead of just trying to keep people alive efficiently."

It should be noted that this doesn't make the phenomenon of borrowed strength go away, it just outsources it to the FAI. If anything, given the kind of perfect recall and easy access to information that an FAI would have, the ratio of cached historical information to newly created information should be much *higher* than that of a human. Of course, an FAI wouldn't suffer the problem of losing the information's deep structure like a human would, but it seems to be a fairly consistent principle that the amount of cached data grows faster than the rate of data generation.

The problem here - the thing that actually decreases utility - is humans taking actions without sufficient understanding of the potential consequences, in cases where "Humans seem to do very well at recognizing the need to check for global consequences by perceiving local features of an action." (CFAI 3.2.2) fails. Out of a sense of morbid curiosity, I wonder what the record is for the most damage caused by a single human without that human ever realizing they did anything bad.

Robin, I'm not blaming the problem on each of us not knowing everything. To restate my thesis:

(1) The current scenario isn't set up for eudaimonic living;
(2) Newton had more fun discovering calculus than you had reading about it;
(3) A lot of the reason why people think of technology as a Grey Death Force has to do with their estrangement from their own tools;
(4) The future need not be one of opaque gadgets with buttons to press that do complicated things.

Also, I had to learn to distrust knowledge that had only been told me; I could only wish it had been instinctive.

Tom, so long as the AI isn't sentient and would in fact be superintelligent enough to regenerate all the knowledge it has learned, we need be concerned neither with its eudaimonia nor its overreaching.

Tom, it's quite possible that the CEV would determine the Right Thing To Do is uplifting humans, in the David Brin sense, until they can once again wipe their own cosmic asses - and then for itself to bow out gracefully, and halt.

"But if you deleted the Pythagorean Theorem from my mind entirely, would I have enough math skills left to grow it back the next time I needed it?"

It's easy if you're allowed to keep the law of cosines ...
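To make the exchange explicit: the law of cosines contains the Pythagorean theorem as the special case of a right angle, so regrowing the latter from the former is a one-line derivation.

```latex
% Law of cosines, for a triangle with sides a, b, c and angle C opposite side c:
%   c^2 = a^2 + b^2 - 2ab\cos C
% With C = \pi/2 (a right angle), \cos C = 0, so the cross term vanishes:
c^2 = a^2 + b^2 - 2ab\cos\tfrac{\pi}{2} = a^2 + b^2
```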

"Borrowing someone else's knowledge really doesn't give you anything remotely like the same power level required to discover that knowledge for yourself."
Hmmm. This doesn't seem to me to be the way it works in domains of cumulatively developed competitive expertise such as chess, go, gymnastics, and the like. In those domains, the depth with which a technique penetrates you when you invent it is far less than the depth with which it penetrates the students you teach it to as children - or at least, that's my impression. Of course, if we could alternately raise and lower our neoteny, gaining adult insights and then returning to childhood to truly learn them, our minds might grow beyond what humans have yet experienced.

What you seem to be describing here is leverage. It's clear (more clear these days) that leverage is good for growth, but too much leverage and you are hosed.

Michael, that's an interesting way of looking at it. In retrospect I was reasoning something like this for the Overcoming Bias project - "No matter how much I write about rationality, I can't communicate the generator that I used to write the posts... but if someone reads it as a teenager and then grows up trying to develop it further, they might find it anyway" - but without the explicit generalization that could also apply to Go.

One really does want to try it from age seven, but I'm not sure how much of this stuff even I could have gotten at age seven. It'd be worth trying, though.

"One really does want to try it from age seven, but I'm not sure how much of this stuff even I could have gotten at age seven. It'd be worth trying, though."

I fully intend to teach my children (when I eventually have them, that is) about cognitive biases and rationality from the time they are born. I think that we greatly underestimate what children are capable of understanding. (It is also possible that I am biased, since I was an unusual child and so I can't generalize my experience across all children - but even if that's true, there is at least a good chance it will work with MY children, so much the better for them.) In the future, our children might be taught concepts in their earliest books that we have not even discovered today.

I suspect the knowledge you get from reading someone's writings is very different from the knowledge you get from working with them or being taught by them. When you work or learn closely with someone, they can see your reasoning processes and correct them at the right point, when they go astray - while they are still newly formed and not too ingrained. Otherwise it relies too much on luck. When in someone's intellectual career should they read OB? Too early, and it won't mean much, lacking the necessary background; too late, and they will be inured against it (assuming it is the right way to go!).

Autodidacts are going to be most intellectually useful when you need to break new ground and the methodologies of the past aren't the way to solve the problems that need solving.

I'd say kids are never too young. First, they are already evolved to grow up in an environment of adult ideas, and to pick them up as they are capable. Second, talking to them about complex ideas will teach them not to fear that complexity, even if they don't understand everything. Much of our culture is built around stupidity being cool and irrationality being goodness.

I see a project like seasteading, and it reminds me a lot of a similar failed project I got excited about ten years ago. If they want to live on the sea, why not buy a boat? Do you HAVE to add the complexity of a new vessel design to all the legal and social challenges? Long-term habitation of the oceans is a solved problem; use the solution.

The first place I encountered the concept that strength must be earned was eight or nine years ago, in a passage from, of all things, Jurassic Park, which stuck with me long after the rest of the book had faded from memory.
The long version: http://www.stjohns-chs.org/english/Seventeenth/jur.html
The short version:

"I'll make it simple," Malcolm said. "A karate master does not kill people with his bare hands. He does not lose his temper and kill his wife. The person who kills is the person who has no discipline, no restraint, and who has purchased his power in the form of a Saturday night special. And that is the kind of power that science fosters, and permits. And that is why you think that to build a place like this is simple."

"It was simple," Hammond insisted.

"Then why did it go wrong?"

"One really does want to try it from age seven, but I'm not sure how much of this stuff even I could have gotten at age seven. It'd be worth trying, though."

Eliezer, it sounds like you need to write a children's book.

This post has an enormous noise-to-content ratio. You gave only one example of a cost of using borrowed strength, and it was unsupported:

"But if no one had been able to use nuclear weapons without, say, possessing the discipline of a scientist and the discipline of a politician - without personally knowing enough to construct an atomic bomb and make friends - the world might have been a slightly safer place."

This is not clear; I would even say it's less than 50% probable. Many scientists, using heuristics against bias that turned out to be wrong in this case, underestimated the aggressiveness of the Soviet Union. Think Bertrand Russell, Albert Einstein, and maybe Oppenheimer. I am cherry-picking, but I don't think a Union of Concerned Scientists could have gotten us through the 1960s without a war with the Soviet Union.

Since kids were brought up, this made me think of the question - any suggestions of how to best teach children rationality from an early age? (This is probably worth a separate post.)

Michael Vassar has a good point that I will take in a different direction: inventors and exploiters are often quite different people. Great explorers are rarely great settlers.

The first person to develop an idea or technology rarely has the best idea of what to do with it. Perhaps s/he is too tied to that development process. Perhaps it takes a different part of the mind to optimize than to discover, and few people have strong modules for both. Sometimes the first mover wins, but the biggest winner is often the later mover who releases a better version.

Citing games again, I look to different sources for ideas and for finished optimizations. Some people can do both, and the more limited the search space the more likely it is that optimizers can find their own ideas. Several people will suggest that X and Y could work well together. They will experiment with things in-game. They will often have the best qualitative grasp of things. Then you bring in the spreadsheet masters to squeeze the last drop of optimization from it. These are the people who calculate the most efficient build for feats, talents, weapons, whatever your game has. Then you can pass that back to the community for people to use.

It takes a certain personality type to explore a new land. It takes a different personality type to start homesteading newly explored territory. It takes yet another to devise regular trade routes back to the mother country.

For anyone wondering where the quote is from: Morisato-san is Keiichi Morisato of the anime/manga franchise _Oh My Goddess!_, and the speaker is Belldandy.

I missed something, obviously. Is there a post I could look at to find Eliezer's understanding of the source of his disagreement with Robin about the singularity?

Cameron, see True Sources of Disagreement.

Also, when in doubt, do what I do and check Andrew Hay's list of my posts.


My understanding of the disagreement appears different from Eli's. My impression is that the core of the disagreement lies in Robin's statement:

"This history of when innovation rates sped up by how much just doesn't seem to support your claim that the strongest speedups are caused by and coincide with new optimization processes, and to a lesser extent protected meta-level innovations"

The fwoom!, god-to-rule-us-all, and issues around models all seem to me to fall out of this contention.

After the discussion around the disagreement, I gave Hal Finney my original and new estimates on these issues. I would be willing to repeat them here if Hal will likewise repeat his, but I'm unsure if anyone is interested anymore.

[Deleted. Tim, you've been requested to stop talking about your views on sexual selection here. --EY]

Eliezer, I think you have somehow gotten very confused about the topic of my now-deleted post.

That post was entirely about cultural inheritance - contained absolutely nothing about sexual selection.

Please don't delete my posts - unless you have a good reason for doing so.
