
November 26, 2008

Comments

I think you should consider one possible thing:

In your story, Engelbart failed to produce the UberTool.

Anyway, looking around and seeing the progress since 1970, I would say he PRETTY MUCH DID. He was not alone, and we should speak of the technology that succeeded rather than the man, but what is all the existing computing infrastructure, the internet, Google, etc., if not the ultimate UberTool, augmenting human cognitive abilities?

Do you think we could keep Moore's law going without all this? Good luck placing the two billion transistors of a next-generation high-end CPU on silicon without using a current high-end CPU.

Hell, this blog would not even exist and you would not have any thoughts about the friendliness of AI. You certainly would not be able to produce a post each day and get comments from people all around the world within several hours - and many of those people came here using the Google ubertool because they share an interest in AI, had never heard of you before, and came back only because all of this is so interesting :)

Actually, maybe the lesson to be learned is that we expect a singularity moment as a single AI "going critical" - and everything will change after that point. But in fact, maybe we are already "critical" now; we just do not see the forest for the trees.

Now, when I say "we", I mean the whole of human civilisation as a "singleton". Indeed, if you consider "us" as a single mind, this "I" (as in intelligence), composed of human minds and interconnected by the internet, is exploding right now...

Indeed, if you consider "us" as a single mind, this "I" (as in intelligence), composed of human minds and interconnected by the internet, is exploding right now

Uh huh. See my video-and-essay: http://www.alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

Tim Tyler: Your essay is convincing. How do you suggest we should act to improve Friendliness?

Tim:

Thanks for the link. I have added the website to my AI portfolio.

BTW, I guess you got it right :) I have come to similar conclusions, including the observation about exponential functions :)

Johnicholas:

I guess that in Tim's scenario, "friendliness" is nowhere near as important a subject. Without a "foom" there is plenty of time for debugging...

luzr: I do not agree. The situation is, we have a complex self-modifying entity (the web of human society, including machines) which is already quite powerful and capable and is growing more powerful and capable more and more rapidly. The "foom" is now.

There are not many guarantees that we can make about the behavior of society as a whole. Does society act like it values human life? Does society act like it values human comfort?

Eliezer: all these posts seem to take an awful lot of your time as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones the eventual arrival of the AI. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness etc) is a good investment or that it has any impact on when and whether they will change their minds.

I think your time would be better spent actually working on, or writing about, the actual details of the problems that need to be solved. Alternately, instead of adding to the already enormous cumulative volume of your posts, perhaps you might try writing something clearer and shorter.

But just piling more on top of what's already been written doesn't seem like it will have an influence.

You would very probably have to reach into the brain and rewire neural circuitry directly; I don't think any sense input or motor interaction would accomplish such a thing.

You must admit: that would be a very impressive mouse.

"I think your [Eliezer's] time would be better spent actually working, or writing about, the actual details of the problems that need to be solved."

I used to think that, but now I realize that Eliezer is a writer and a theorist, not necessarily a hacker, so I don't expect him to be good at writing code. (I'm not trying to diss Eliezer here, just reasoning from the available evidence and the fact that becoming a good hacker requires a lot of practice.) Perhaps Eliezer's greatest contribution will be inspiring others to write AI. We don't have to wait for Eliezer to do everything. Surely some of you talented hackers out there could give it a shot.

Thanks for your interest - but comments on someone else's blog post are not the place for my policy recommendations.

As far as risks go, the idea that the explosion has started probably makes little difference. Risks from going too fast would be about the same. Risks from suddenly changing speed might be slightly reduced.

The main implications of my essay on the topics discussed here are probably that it makes extrapolation from our recent history and not-so-recent evolutionary history seem like a more promising approach - and it makes the whole idea of some highly-localised significant future event that changes everything seem less likely.

Johnicholas:

"The "foom" is now."

I like that. Maybe we can get some T-shirts? :)

"There are not many guarantees that we can make about the behavior of society as a whole. Does society act like it values human life? Does society act like it values human comfort?"

Good point. Anyway, it is questionable whether we can apply any of Eliezer's friendliness guidelines to the whole of society instead of to a single strong general AI entity.

PK, you are absolutely right. We can even take things a step further and say that positive AI will happen regardless of Eliezer's involvement - and even go so far as to say that, since he lacks the needed experience in both math and programming, his involvement will be as a cheerleader and not as someone who makes it happen.

Humanity is in a FOOM relative to the rest of the biosphere but of course it doesn't seem ridiculously fast to us; the question from our standpoint is whether a brain in a box in a basement can go FOOM relative to human society. Anyone who thinks that because we're already growing at a high rate, the distinction between that and a nanotech-capable superintelligence must not be very important, is being just a little silly. It may not even be wise to call them by the same name, if it tempts you to such folly - and so I would suggest reserving "FOOM" for things that go very fast relative to *you*.

For the record, I've been a coder and judged myself a reasonable hacker - set out to design my own programming language at one point, which I say not as a mark of virtue but just to demonstrate that I was in the game. (Gave it up when I realized AI wasn't about programming languages.)

"Eliezer: all these posts seem to take an awful lot of your time as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones the eventual arrival of the AI. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness etc) is a good investment or that it has any impact on when and whether they will change their minds."
As you may have guessed, I think just the opposite. The idea that Eliezer, on his own, can figure out
  1. how to build an AI
  2. how to make an AI stay within a specified range of behavior, and
  3. what an AI ought to do
suggests that somebody has read Ender's Game too many times. These are three gigantic research projects. I think he should work on #2 or #3.

Not doing #1 would mean that it actually matters that he convince other people of his ideas.

I think that #3 is really, really tricky. Far beyond the ability of any one person. This blog may be the best chance he'll have to take his ideas, lay them out, and get enough intelligent criticism to move from the beginnings he's made, to something that might be more useful than dangerous. Instead, he seems to think (and I could be wrong) that the collective intelligence of everyone else here on Overcoming Bias is negligible compared to his own. And that's why I get angry and sometimes rude.

Generalizing from observations of points at the extremes of distributions, we can say that when we find an effect many standard deviations away from the mean, its position is almost ALWAYS due more to random chance than to the properties underlying that point. So when we observe a Newton or an Einstein, the largest contributor to their accomplishments was not their intellect, but random chance. So if you think you're relying on someone's great intellect, you're really relying on chance.
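
As a rough illustration of this selection-effect claim (my own sketch, not part of the original comment), suppose observed achievement is just underlying ability plus independent luck, both drawn from normal distributions whose variances are invented here. Ranking a large population by observed achievement and looking at the very top shows how much of the extreme result is attributable to each component:

    import random

    def simulate(population=1_000_000, top_k=10, ability_sd=1.0, luck_sd=1.0):
        # Each person is (ability, luck); observed achievement = ability + luck.
        people = [(random.gauss(0, ability_sd), random.gauss(0, luck_sd))
                  for _ in range(population)]
        # Take the top performers by observed achievement.
        top = sorted(people, key=lambda p: p[0] + p[1], reverse=True)[:top_k]
        avg_ability = sum(a for a, _ in top) / top_k
        avg_luck = sum(l for _, l in top) / top_k
        return avg_ability, avg_luck

    if __name__ == "__main__":
        ability, luck = simulate()
        print(f"top 10 of a million: mean ability {ability:+.2f} SD, mean luck {luck:+.2f} SD")

With equal variances, the top few out of a million owe roughly as much of their observed edge to luck as to ability; make the luck term noisier and the claim above gets stronger, make it quieter and it gets weaker. The conclusion depends entirely on the assumed ratio of the two variances.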

I would suggest reserving "FOOM" for things that go very fast relative to *you*.

It sounds as though "FOOM" will always lie about a dozen doublings in the future - for anyone riding the curve. Like the end of the rainbow, it will recede as it is approached.

"For the record, I've been a coder and judged myself a reasonable hacker - set out to design my own programming language at one point, which I say not as a mark of virtue but just to demonstrate that I was in the game. (Gave it up when I realized AI wasn't about programming languages.)"

AI is about programming languages since AI is about computers, and current "AI" languages really aren't that great. I would say that it would be of huge value if someone could design an AI-specific language better than Lisp. Also, a programming language that deals better with massive parallelism would be of great value to AI. Devoting yourself to that goal would further AI, since the problem is one of theory and one of enabling technology.

Just an aside: "good hacker in your own view" isn't a good metric, since people tend to think of themselves as better at something than they really are.

Phil: It seems clear to me that Newton and Einstein were not universally brilliant relative to ordinary smart people like you in the same sense that ordinary smart people like you are universally brilliant relative to genuinely average people. But it seems equally clear that it was not a coincidence that the same person invented calculus, optics, AND universal gravitation, or general relativity, special relativity, the photoelectric effect, Brownian motion, etc. Newton and Einstein were obviously great scientists in a sense that very few other people have been. It likewise isn't chance that Tiger Woods, Michael Jordan, or Kasparov dominated game after game, or that Picasso and Beethoven created many artistic styles.

That said, Eliezer doesn't have any accomplishments that strongly suggest that his abilities at your tasks 1-3 are comparable to the domain-specific abilities of the people mentioned above, and in the absence of actual accomplishments of world-historical magnitude, the odds against any one person accomplishing goals of that magnitude seem to be hundreds to one (though uncertainty regarding the difficulty of the goals and the argument itself justifies a slightly higher estimate of the probabilities in question). In addition, we don't have strong arguments that tasks 1-3 are related enough to expect solutions to be highly correlated, furthering the argument that building a community is a better idea than trying to be a lone genius.

Google's internal facilities and processes seem to have something of the Ubertool about them. There's a famous quote going around: "Google uses Bayesian filtering the way Microsoft uses the if statement." Certainly they seem closer to taking over the world than anyone else.
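
For readers unfamiliar with the contrast in that quote, here is a toy sketch (my own, and obviously nothing like Google's actual code): instead of hand-written rules such as 'if "cheap pills" in text', a naive Bayes filter scores a message by combining per-word spam/ham likelihoods learned from labelled examples:

    import math
    from collections import Counter

    class NaiveBayesFilter:
        def __init__(self):
            self.spam_words, self.ham_words = Counter(), Counter()
            self.spam_msgs = self.ham_msgs = 0

        def train(self, text, is_spam):
            # Count word occurrences per class.
            words = text.lower().split()
            if is_spam:
                self.spam_words.update(words)
                self.spam_msgs += 1
            else:
                self.ham_words.update(words)
                self.ham_msgs += 1

        def spam_probability(self, text):
            # Naive Bayes in log space with add-one smoothing.
            total = self.spam_msgs + self.ham_msgs
            log_spam = math.log(self.spam_msgs / total)
            log_ham = math.log(self.ham_msgs / total)
            vocab = len(set(self.spam_words) | set(self.ham_words))
            spam_total = sum(self.spam_words.values()) + vocab
            ham_total = sum(self.ham_words.values()) + vocab
            for w in text.lower().split():
                log_spam += math.log((self.spam_words[w] + 1) / spam_total)
                log_ham += math.log((self.ham_words[w] + 1) / ham_total)
            return 1 / (1 + math.exp(log_ham - log_spam))

    f = NaiveBayesFilter()
    f.train("cheap pills buy now", True)
    f.train("meeting notes attached", False)
    print(f.spam_probability("buy cheap pills"))  # well above 0.5

The point of the quip is that the behavior comes from counted data rather than from a programmer enumerating rules, so it adapts as the training data changes.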

PK: One big point that Eliezer is trying to make is that just "hacking away at code" without a much better understanding of intelligence is actually a terrible idea. You just aren't going to get very far. If, by some miracle, you do, the situation's even worse, as you most likely won't end up with a Friendly AI. And an UnFriendly AI could be very very bad news.

GenericThinker: A DSL might help, but you need to understand the domain extremely well to design a good DSL.
