
December 15, 2008

Comments

Robin, did you see my wager with Eliezer? You might want to profit from his overconfidence yourself, if he is willing to make more than one such bet.

I did recently call him dramatic, and if offense was taken, I apologize. I greatly respect Eliezer.

I *do* think that his explanation for why people will want to revive him in the future was less than perfectly rational, and I think he was a very small bit dramatic in explaining it. Instead of reasoning about why people would do so, he seemed to me to be saying "I'm going to try to make it a world where people do so, because people should want to".

Which is fine, but I think it's a bit dramatic for a person who prides himself on rationality to argue that it's OK to count on himself to have an impact on such amazingly complex issues.

So, again, if I misread him, or am just wrong, I apologize. But that part of his "why you should get cryonics" article struck me as a tiny bit dramatic.

I also think the comment in which I stated that opinion was poorly written, and was received as much more inflammatory than it was intended, which I regret.

"this seems impolite unless requested."
I assume the request must come from EY himself, else you have it: I'd love to hear it.
(but this does come from a semi-troll whose favorite post here may be the "gotta catch a plane" dialog.)

If Robin's abstractions are good, then Eliezer should be able to describe the foom event in economic/evolutionary terms without resorting to his abstractions, and (I think) that should convince Robin.

If Robin's abstractions break down in the case of a self modifying AI, Eliezer should find other examples of them breaking down that Robin already acknowledges, and that are similar in some relevant way to self modifying AI.

Perhaps each party should outline situations in which their own abstractions don't apply or aren't accurate.

Doesn't Eliezer's world view make him (Eliezer) the most important person in the world? The problem of Friendly AI is the most important problem we face, since it determines whether the future is like Heaven or Hell. And Eliezer is clearly the person who has put the most work into this specific problem, and has made the most progress.

Having beliefs that make you yourself the most important person in the world, possibly the most important person in history, has got to be a powerful source of bias. I'm not saying such a bias can't be overcome, but the circumstance does increase the burden of proof on him.

@burgerflipper: I think that Robin and I both know what my temptations to bias are; hence there's little enough need to list them.

@James, I think I already went into that in around as much detail as I can do. My fundamental objection to Robin's worldview is that it doesn't deal with what happens when agents get smarter, larger, better-designed, or even faster. So Robin's methodology would break down if e.g. he tried to describe the effect of human intelligence on the planet, not using abstractions that he's already fine-tuned on humans, but using only the sort of abstractions that he would have approved of using before humans came along. If you're allowed to "predict" humans using experimental data from after the fact, that is, of course, hindsight.

@Hal: From my perspective, I'm working on the most important problem I can find, in order to maximize my expected utility. It's not my fault if others don't do the same.

Also, I keep saying this, but we're talking about Heaven versus Null. Hell takes additional bad luck beyond that.

We're running a rationalist culture here; so to make something look bad here, you overdramatize it, so that people will suspect its adherents of bias, because we all know that (outside rationalist culture) things are overdramatized to sell them. So here we have the opponents casting the scenario in a dramatic light, and the proponents trying to make it sound less dramatic. This is something to keep in mind.

I don't see many attempts to underdramatize the foom scenario.

I suspect I have a pretty good idea of the gist as well, but I'd love to read how he'd choose to say it.

Changing gears, since a complete and true copy of a person is that person, and preserving brains so they might live in the future is a worthy goal, why not concentrate on creating the first unfriendly AI? (Since being first is vital.)

Then just give it one Asimov-like rule: when you disassemble a person you must also simulate him.

That leaves tricky questions about setting the rules of the sand box we get plunked down in. But would the sandbox rules need to be as perfect as the AI's utility function?

Eliezer:
But the question is: Where does ROBIN think Robin's abstractions break down? He thinks he's accounted for your scenario, but he probably doesn't think his abstractions are perfect. It should be a strong argument if you can show that your foom is in one of the regions where his abstractions break, but first he should concede those regions.

Burger flipper: "That leaves tricky questions about setting the rules of the sand box we get plunked down in. But would the sandbox rules need to be as perfect as the AI's utility function?"

Yes.

If fooming AI is an eventuality and the cutting edge of friendly technology consists of inspiring the next generation, it might be time for a contingency plan.

If the AI is going to be so powerful it cannot be contained in any box, if it could use human atoms for building blocks, and if simulated me=me; maybe we should trade the atoms for a box, one to be placed in ourselves: simulations given a simulated earth. We don't even have to swallow the blue pill. Leave the memories so we either don't build another AI within the AI simulation, or if we do build one, leave the memory of the solution intact so we just live in a recursive loop.

burger flipper: trade?

If I choose to keep a set of ants in a glass sandbox on my desk, I do it because it amuses me. There is absolutely nothing those ants could hope to offer me which would make one iota of difference in the matter, and indeed, which I could not take from them myself.

What on earth could we possibly trade with an unfriendly AI?

Eliezer is pretty dramatic a lot of the time. I mean, sometimes he uses King James Version grammar when speaking of himself. That's a little disturbing.

However, I have often observed that, in subjective fields, people who absurdly overstate their claims, like Freud and Kuhn and Lovelock and Skinner and Derrida and most famous philosophers; or make claims that are absurd to begin with, like James Tipler and (sometimes) Jerry Fodor and most of whatever famous philosophers are left; get more attention than people like, I don't know, Spinoza, who make reasonable claims.

"I mean, sometimes he uses King James Version grammar when speaking of himself."

Not that I think that's a bad thing, but still, I'm not entirely sure what you're talking about here. Thou shalt give me examples.

Mike Blume, I'm simply proposing an ad hoc stipulation be built in from the start: when you pull me apart for the carbon atoms, plunk down a sim of me in the ant farm. Instead of us building an escape-proof box for it, let it build one for us.

Not ideal perhaps. But if the first AI is likely to take over, the world's leading proponent of friendly AI is in the providing-inspiration-for-the-young'ins stage, and copies/uploads/simulations of a person are that person, it seems like the best bet.

And then EY can redirect his efforts toward creating the first AI--friendly or not.

OK, that makes more sense. It just seems to me like you run into massive wishing problems when you try to formulate the phrase "when you pull me apart for carbon atoms".

@Phil

Berashith Bera Eliezer Ath Ha Amudi Va Ath Ha AI.
In the beginning Eliezer Created the Friendly and the AI.

Viamr Eliezer Ihi AI Vihi AI.
And said Eliezer Let there be AI, and there was AI.

-- Overcoming Genesis, 1:1, 1:3, The AI Testament (^.^)

Eliezer: "Thou shalt give me examples."

Put this in a google search box:
site:overcomingbias.com "and lo"
site:overcomingbias.com "unto"

I'll retract the qualifier "when speaking of himself", which I used because the examples I remembered were from the recent string of autobiographical posts. It seems to be a general inclination to use occasional archaic words.
