
December 09, 2008

Comments

But if you're going to bother visualizing the future, it does seem to help to visualize more than one way it could go, instead of concentrating all your strength into one prediction.

So I try not to ask myself "What will happen?" but rather "Is this possibility allowed to happen, or is it prohibited?"

I thought that you were changing your position; instead, you have used this opening to lead back into concentrating all your strength into one prediction.

I think this characterizes a good portion of the recent debate: Some people (me, for instance) keep saying "Outcomes other than FOOM are possible", and you keep saying, "No, FOOM is possible." Maybe you mean to address Robin specifically, and I don't recall any acknowledgement from Robin that FOOM is >5% probable. But in the context of all the posts from other people, it looks as if you keep making arguments for "FOOM is possible" and implying that they prove "FOOM is inevitable".

A second aspect is that some people (again, e.g., me) keep saying, "The escalation leading up to the first genius-level AI might be on a human timescale," and you keep saying, "The escalation must eventually be much faster than the human timescale." The context makes it look as if this is a disagreement, and as if you are presenting arguments that AIs will eventually self-improve out of the human timescale and saying that they prove FOOM.

No diminishing returns on complexity in the region of the transition to human intelligence: "We're so similar to chimps in brain design, and yet so much more powerful; the upward slope must be really steep."

Or *there is no curve* and it is a random landscape with software being very important...

Scalability of hardware: "Humans have only four times the brain volume of chimps - now imagine an AI suddenly acquiring a thousand times as much power."

Bottlenose dolphins have twice the brain volume of other dolphins (and a brain volume comparable to ours), yet aren't massively more powerful than them. Asian elephants have five times the weight...

Phil's comment above seems worth addressing. My <1% figure was for an actual single AI fooming suddenly into a takes-over-the-world unfriendly thing, e.g., kills/enslaves us all. (Need I repeat that even a 1% risk is serious?)

Eli,

Over the last several years, your writing's become quite a bit more considered, insightful, and worth reading. I fear, though, that the information density has, if anything, dropped even lower than it was before. I really *want* to "hear" (i.e., read) what you have to say --- but just keeping up is a full-time job. (FYI, this isn't any damning critique; I've had this criticism aimed at me before, too.)

So a plea: do you think you could find a way to say the very worthwhile things you have to say, perhaps more *concisely?* This is argument --- worthwhile argument --- not polemic and not poetry. Apply whatever compression algorithm you think appropriate; we'll manage...

jb

I'll second jb's request for denser, more highly structured representations of Eliezer's insights. I read all this stuff and find it entertaining and sometimes edifying, but disappointing in that it's not converging on either a central thesis or central questions (preferably both).

I think one way to sum up parts of what Eliezer is talking about in terms of AGI going FOOM is as follows:

If you think of Intelligence as Optimization, and we assume you can build an AGI with optimization power near or at the human level (anything below would be too weak to affect anything; a human could do a better job), then we can use the following argument to show that AGI does go FOOM.

We already have proof that human-level optimization power can produce near-human-level artificial intelligence (premise), so simply point it at an interesting optimization problem (itself) and recurse. As long as the number of additional improvements per improvement made to the AGI is greater than 1, FOOM will occur.
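To make that "greater than 1" condition concrete, here is a minimal toy model (my own sketch; the parameter k, the average number of new improvements each applied improvement makes possible, is an assumed illustrative number, not a measured one):

    # Toy model of the "improvements per improvement" condition above.
    # k is an assumed average number of new improvements that each applied
    # improvement makes possible; it is an illustrative parameter, not data.
    def total_improvements(k, rounds=50):
        """Total improvements accumulated over successive self-improvement rounds."""
        total, frontier = 0.0, 1.0   # start from one seed improvement
        for _ in range(rounds):
            total += frontier        # apply the improvements just found
            frontier *= k            # each of them opens up k new ones, on average
        return total

    print(total_improvements(0.9))   # ~10: the cascade converges and fizzles out
    print(total_improvements(1.1))   # ~1164 and still growing: the cascade diverges

With k < 1 the total approaches 1/(1 - k) and self-improvement peters out; with k > 1 it grows without bound, which is the FOOM case.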

It should not get stuck at human-level intelligence, as human level is nowhere near as good as you can get.

Why wouldn't you point your AGI (using whatever techniques you have available) at itself? I can't think of any reasonable reason not to that wouldn't also preclude your building the AGI in the first place.

Of course this means we need human-level artificial general intelligence, but then it has to be that in order to have anywhere near human-level optimization power. I won't bother going over what happens when you have AI that is better than humans at some things but not all; simply look around you right now.

"Or, to make it somewhat less strong, as if I woke up one morning to find that banks were charging negative interest on loans?"

They already have, at least for a short while.

http://www.nytimes.com/2008/12/10/business/10markets.html

The idea of making a mind-design N-space by putting various attributes on the axes, such as humorous/non-humorous, conceptual/perceptual/sensual, etc. -- how much does this tell us about the real possibilities?

What I mean is, for a thing to be possible, there must be some combination of atoms that can fit together to make it work. But merely making an N-space does not tell us about what atoms there are and what they can do.

Come to think of it, how can we assert *anything* is possible without having already designed it?

When someone designs a superintelligent AI (it won't be Eliezer), without paying any attention to Friendliness (the first person who does it won't), and the world doesn't end (it won't), it will be interesting to hear Eliezer's excuses.

Unknown, do you expect money to be worth anything to you in that situation? If so, I'll be happy to accept a $10 payment now in exchange for a $1000 inflation-adjusted payment in that scenario you describe.
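(At face value, and setting aside inflation adjustment and whether money is worth anything in that world, such a bet breaks even when $10 = p × $1000, i.e. when the probability p of the scenario is about 1%.)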

I second JB's request regarding concise writing. Eliezer's posts invariably have at least one or two really insightful ideas, but it often takes a few thousand more words to make those points than it should.

I'd like to add to JB's and Peanut's points: for example, the 1000-word dialogue in Sustained Strong Recursion struck me as especially redundant, when the same could have been communicated in a couple of clear formulas, or just by referring to the elementary and well-known concept of compound interest.
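(That concept is just the formula x_n = x_0 (1 + r)^n for a quantity compounding at rate r per period over n periods.)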

Most of the variety of Eliezer's output is useful to some audience, but there's a serious problem of getting the right people to the right documents.

Eliezer, I am sending you the $10. I will let you know how to pay when you lose the bet. I have included in the envelope a means of identifying myself when I claim the money, so that it cannot be claimed by someone impersonating me.

Your overconfidence will surely cost you on this occasion, even though I must admit that I was forced to update (a very small amount) in favor of your position, on seeing the surprising fact that you were willing to engage in such a wager.

Unknown, where are you mailing it to?

Eliezer: c/o Singularity Institute
P.O. Box 50182
Palo Alto, CA 94303 USA

I hope that works.

Eliezer: did you receive the $10? I don't want you making up the story, 20 or 30 years from now, when you lose the bet, that you never received the money.

Not yet. I'll inquire.

"I have included in the envelope a means of identifying myself when I claim the money, so that it cannot be claimed by someone impersonating me."

Doesn't that technically make you now Known?

Also, how much time has to pass between an AI 'coming to' and the world ending? What constitutes an AI for this bet?

Eliezer, will you be donating the $10 to the Institute? If so, does this constitute using the wager to shift the odds in your favour, however slightly?

Yes, the last two are jokes. But the first two are serious.

Ben Jones, the means of identifying myself will only show that I am the same one who sent the $10, not who it is who sent it.

Eliezer seemed to think that one week would be sufficient for the AI to take over the world, so that seems enough time.

As for what constitutes the AI, since we don't have any measure of superhuman intelligence, it seems to me sufficient that it be clearly more intelligent than any human being.

I'll pay $1,000 if I manage to still be alive and relatively free several weeks after an unfriendly superintelligence is released. Purely as a philanthropic gesture of gratitude for my good fortune, or relief that my expectations were unfounded.

Cameron, that's great but you've got to sell that payout to the highest bidder if you want to generate any info.

Unknown, I have received your $10 and the bet is on.
