
October 06, 2008

Comments

I think if history remembers you, I'd bet that it will be for the journey more than its end. If the interesting introspective bits get published in a form that gets read, then I'd bet it will be memorable in the way that Lao zi or Sun zi is memorable. In case the Singularity / Friendly AI stuff doesn't work out, please keep up the good work anyway.

If history remembers him, it will be because the first superhuman intelligence didn't destroy the world and with it all history. I'd say the Friendly AI stuff is pretty relevant to his legacy.

I'd say "thanks" but that is so completely not what I am trying to do.

Re: destroying the world and all its history

A superintelligence will more likely be interested in conservation. Nature contains a synopsis of the results of quadrillions of successful experiments in molecular nanotechnology, performed over billions of years - and quite a bit of information about the history of the world. That's valuable stuff, no matter what your goals are.

Wow, one of those quiet "aha" moments. Just by explaining something I'd misunderstood, you've totally changed my direction. Seriously, thanks.

Thanks! And: In cases like this, it helps me to know which "Aha" you got from what - if you can say. I'm never quite sure whether my writing is going to work, and so I'm always trying to figure out what did work. Email me if you prefer not to comment.

Trying to do the impossible is definitely not for everyone. Natural talent is only the ante to sit down at the table. The chips are the years of your life. If wagering those chips and losing seems like an unbearable possibility to you, then go do something else. Seriously. Because you can lose.

This is an awesome quote that is going into my collection, but could you please restate this for posterity as something like the following, making clear that you mean "impossible" and not impossible:

Trying to do the "impossible" is definitely not for everyone. Natural talent is only the ante to sit down at the table. The chips are the years of your life. If wagering those chips and losing seems like an unbearable possibility to you, then go do something else. Seriously. Because you can lose.

I'm like you in that I can't stand the grimace-and-slog model of "perseverance" (and the way some people elevate "mortification of the flesh" as a virtue makes me flinch in horror), but unlike you in that every time I've hit even a medium-hard problem I've tended to bounce off and re-script on the assumption that I can't.

So the "aha" was the idea that pushing into a problem can convert a sheer cliff into a hill climb, but that the danger comes each time something looks like another cliff. The proper response is not to bounce but to push. There is no cliff until you can prove it (and don't trust a facile proof).

Also, now I get to look back at my many "I can't"s and re-examine them for opportunities to push.

"""A superintelligence will more-likely be interested in conservation. Nature contains a synopsis of the results of quadrillions of successful experiments in molecular nanotechnology, performed over billions of years - and quite a bit of information about the history of the world. That's valuable stuff, no matter what your goals are."""

My guess is that an AI could re-do all those experiments from scratch within three days. Or maybe nanoseconds. Depending on whether it starts the moment it leaves the lab or as a Jupiter brain.

Attempting the "impossible": like chewing, chewing, and chewing, unable to swallow; it's not soft and small enough yet. When you do get to swallow a small bit, you will often regurgitate it. But some of it may remain in your system, enough to subsist on, just barely, and you may know not to take another bite of the same part of "impossible".

Re: Nature contains a synopsis of the results of quadrillions of successful experiments in molecular nanotechnology, performed over billions of years [...] My guess is that an AI could re-do all those experiments from scratch within three days. Or maybe nanoseconds. Depending on whether it starts the moment it leaves the lab or as a Jupiter brain.

Real experiments take time - especially when they must be performed in series. Small experiments can be done relatively quickly - but not all of nature's experiments have been small.

A superintelligence will perform experiments, of course, but it seems likely to me that it will also continue the job we have started, that of harvesting existing results. The approach has given us a lot of mileage so far - and seems to represent vastly less work than the alternatives.

It's true that it may eventually replace DNA and protein - and thus largely lose interest in much of the "old" biosphere, but again, that will probably not happen overnight - or indeed over three nights - and even then, it will probably want to preserve some of it, to help it understand its origins.

Thousands of years ago, philosophers began working on "impossible" problems. Science began when some of them gave up working on the "impossible" problems, and decided to work on problems that they had some chance of solving. And it turned out that this approach eventually led to the solution of most of the "impossible" problems.

Eliezer, I remember an earlier post of yours, when you said something like:
"If I would never do impossible things, how could I ever become stronger?"
That was a very inspirational message for me, much more than any other similar sayings I heard, and this post is full of such insights.

Anyway, on the subject of human augmentation, well, what about it?
If you are talking about a timescale of decades, then intelligence augmentation does seem like a worthy avenue of investment (it doesn't have to be full-scale neural rewiring, it could be just smarter nootropics).

Eliezer said:

'On the timescale of years, perseverance is to "keep working on an insanely difficult problem even though it's inconvenient and you could be getting higher personal rewards elsewhere".'

This is inconsistent with utility maximization assumptions. Elsewhere on O.B. it has been discussed that the pursuit of dreams can be a payoff in itself. We must notice that it is the expectation of the highest personal rewards, not the rewards themselves, that drives our decision-making. Working on the seemingly very hard problems is rewarding because it is among the most rewarding activities available.

Perseverance, like everything, is good in moderation.
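
A toy calculation of the point above (my illustration, not the commenter's; every number is made up) shows how picking the "impossible" problem can still maximize expected utility once the intrinsic reward of the pursuit and the small chance of a large payoff are both counted:

# Illustrative sketch only: made-up utilities, not a model anyone in this thread proposed.

def expected_utility(p_success, payoff_on_success, intrinsic_reward_per_year, years):
    """Expected utility = chance-weighted payoff plus the reward of the pursuit itself."""
    return p_success * payoff_on_success + intrinsic_reward_per_year * years

hard_problem = expected_utility(p_success=0.01, payoff_on_success=10_000,
                                intrinsic_reward_per_year=40, years=20)
safe_career = expected_utility(p_success=0.95, payoff_on_success=500,
                               intrinsic_reward_per_year=10, years=20)

print(hard_problem, safe_career)  # 900.0 vs 675.0: the "impossible" problem can come out ahead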

The funny thing is that the recent popularization of economics, all the Freakonomics-style books (Dan Ariely, Tyler Cowen, Tim Harford, Robert Frank, Steve Landsburg, Barry Nalebuff), is summed up by Steve Levitt when he said he likes solving little problems rather than not solving big problems. Thus, economists still don't understand business cycles, growth, inequality - but they are big on why prostitutes don't use condoms, why sumo wrestlers cheat in tournaments, or why it is optimal to peel bananas from the 'other' end. It's better than banging your head against the wall, but I don't think anyone spends the first two years in econ grad school to solve these problems.

@eric falkenstein

"solving little problems"

But you know eric, solving the seemingly little problem often illuminates a great natural principle. One of my favorite examples of this is Huygens, when down with the flu, suddenly noticing how the pendulums of his clocks always ended up swinging against each other. Such a tiny thing, how important could it be? Yet in the end so-called coupled oscillation is everywhere from lasers to fireflies. Never underestimate the power of the small insight.
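
To make the coupled-oscillation point concrete, here is a minimal sketch (my own illustration, not part of the original thread, and not Huygens' actual physics): a Kuramoto-style model of two weakly coupled phase oscillators. With negative coupling they settle into anti-phase lock, loosely echoing the opposite swings Huygens noticed in his clocks. The frequencies and coupling constant are made-up illustrative values.

import math

def phase_difference(omega1=1.00, omega2=1.05, K=-0.2, dt=0.01, steps=20000):
    """Integrate d(theta_i)/dt = omega_i + K*sin(theta_j - theta_i) with Euler steps
    and return the final phase difference, wrapped to [0, 2*pi)."""
    th1, th2 = 0.0, 2.0  # arbitrary starting phases
    for _ in range(steps):
        d1 = omega1 + K * math.sin(th2 - th1)
        d2 = omega2 + K * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return (th2 - th1) % (2 * math.pi)

if __name__ == "__main__":
    # A value near pi (~3.14) means the oscillators have locked in anti-phase.
    print("final phase difference:", round(phase_difference(), 3))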

This from the 5th:

But if you can't do that which seems like a good idea - if you can't do what you don't imagine failing - then what can you do?

And this from today:

To do things that are very difficult or "impossible",

First you have to not run away. That takes seconds.

Then you have to work. That takes hours.

Then you have to stick at it. That takes years.

are nice little nuggets of wisdom. If I were more cynical, I might suggest they are somewhat commonsense, at least to those attracted to seemingly intractable dilemmas and difficult work of the mind, but I won't. It's good to have it summed up.

I find this rah-rah stuff very encouraging, Eliezer. Zettai daijyobu da yo ("it'll definitely be all right") and all that. Good to bear in mind in my own work. But I think it is important to remember that not only is it possible you will fail, it is in fact the most likely outcome. Another very likely outcome: you will die.

You need a memento mori.

Phil Goetz: I would still really LOVE to see you write up the book-length version of the above comment as you once suggested.

"If you are talking about a timescale of decades, than intelligence augmentation does seems like a worthy avenue of investment"

This seems like the way to go to me. It's like "generation ships" in sci-fi. Should we launch ships to distant star systems today, knowing that ships launched 10 years from now will overtake them on the way?

Of course in the case of AI, we don't know what the rate of human enhancement will be, and maybe the star isn't so distant after all.

The things you have had to say about "impossible" research problems are among your most insightful. They fly right in the face of the more devilishly erroneous human intuitions, especially group intuitions, about what are and are not good ways to spend time and resources.

"Trying to do the impossible is definitely not for everyone. Natural talent is only the ante to sit down at the table"

It's a bit of a technical thing, but as a 20-year professional poker player, I would suggest that you change "ante" to "buy-in." It's what you mean.

An "ante" is the chips you put in to play before being dealt cards on any given single hand (and most poker games now, being Texas Hold'em, don't have antes at all).

A "buy-in" is everything you're putting at risk.
