
December 02, 2008

Comments

"I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less."

I am glad I can agree for once :)

"The main thing I'll venture into actually expecting from adding "insight" to the mix, is that there'll be a discontinuity at the point where the AI understands how to do AI theory, the same way that human researchers try to do AI theory. An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code;"

Anyway, my problem with your speculation about hard takeoff is that you seem to make the same conceptual mistake that you so dislike about Cyc - you seem to think that AI will be mostly "written in the code".

I suspect it is very likely that the true working AI code will be relatively small and already pretty well optimized. The "mind" itself will be created from it by some self-learning process (my favorite scenario involves a weak AI as the initial "tutor") and will in fact mostly consist of a vast number of classification coefficients and connections, or something like that (think Bayesian or neural networks).

While it probably will be within the AI's power to optimize its "primal algorithm", the gains there will be limited (it will be pretty well optimized by humans anyway). Its ability to reorganize its "thinking network" might be severely limited. Same as with humans - we nearly understand how a single neuron works, but are far from understanding the whole network. Also, with any further self-improvement the complexity grows further, and it is quite reasonable to predict that this complexity will grow faster than the AI's ability to understand it.
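To make that concrete, here is a toy sketch (everything in it - the shapes, the learning rule, the library - is purely illustrative, not a claim about how a real AI would actually be built):

```python
# Toy sketch: the "primal algorithm" is a handful of lines, but the
# "mind" it produces is a large, opaque pile of numerical coefficients.
import numpy as np

rng = np.random.default_rng(0)

# The "mind": ~100k coefficients with no human-readable structure.
W1 = rng.normal(size=(1000, 100)) * 0.01
W2 = rng.normal(size=(100, 10)) * 0.01

def think(x):
    """Forward pass through the learned network."""
    return np.tanh(x @ W1) @ W2

def learn(x, target, lr=1e-3):
    """The small, well-understood core: one step of gradient descent."""
    global W1, W2
    h = np.tanh(x @ W1)
    err = h @ W2 - target
    grad_h = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= lr * np.outer(h, err)
    W1 -= lr * np.outer(x, grad_h)

# Humans (or the AI itself) can polish `learn` all they like, but
# reorganizing W1/W2 directly - the actual "thinking network" - is a
# different and much harder problem.
```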

I think it all boils down to a very simple showstopper - if you are building a perfect simulation, how many atoms do you need to simulate an atom? (BTW, this is also a showstopper for the "nested virtual reality" idea.)

Note however that this whole argument is not really mutually exclusive with hard takeoff. The AI can still build a next-generation AI that is better. But the "self" part might not work. (BTW, the interesting part is that the "parent" AI might then face the same dilemma with its descendant's friendliness ;)

I also think that in all your "foom" posts you underestimate the empirical form of knowledge. It sounds like you expect the AI to just sit down in the cellar and think, without many inputs or actions, then invent the theory of everything and take over the world.

That is not going to happen, at least for the same reason that an endless chain of nested VRs is unlikely.

"All these complications is why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights - and the "fold the curve in on itself" paradigm of recursion is going to amplify even small roughnesses in the trajectory."

Wouldn't that be a reason to say, "I don't know what will happen"? And to disallow you from saying, "An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely"?

If you can't make quantitative predictions, then you can't say that the foom might take an hour or a day, but not six months.

A lower-bound (of the growth curve) analysis could be sufficient to argue the inevitability of foom.

I agree there's a time coming when things will happen too fast for humans. But "hard takeoff", to me, means foom without warning. If the foom doesn't occur until the AI is smart enough to rewrite an AI textbook, that might give us years or decades of warning. If humans add and improve different cognitive skills to the AI one-by-one, that will start a more gently-sloping RSI.

"So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does beg the question of what those changes were."

Get it right!

http://begthequestion.info/

"But the much-vaunted "massive parallelism" of the human brain, is, I suspect, mostly cache lookups to make up for the sheer awkwardness of the brain's serial slowness - if your computer ran at 200Hz, you'd have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less."

That is just patently false: the brain is massively parallel, and the parallelism is not cache lookups - it is more like current GPUs. The computational estimate does not account for why the brain has as much computational power as it does, ~10^15 or more. When you talk about relative speed, what you have to remember is that we are tied to our perception of time, which runs at roughly 30-60 FPS. Speeds beyond 200 Hz aren't necessary, since the brain doesn't have RAM or caches like a traditional computer to store solutions in advance in the same way. At 200 Hz the brain can run fast enough to give us real-time perception while still having time to do multi-step operations. A nice thing would be if we could think about multiple things in parallel, the way a computer with multiple processors can focus on more than one application at the same time.

I think all these discussions of the brain's speed are fundamentally misguided, and show a lack of understanding of current neuroscience, computational or otherwise. To say "run the brain at 2 GHz" - what would that even mean? How would that work with our sensory systems? If you have only one processing element with 6-12 functional units, then 2 GHz is nice; if you have billions of little processors and your senses all run at around 30-60 FPS, then 200 Hz is just fine without being overkill, unless your algorithms require more than 100 serial steps. My guess would be that the brain uses parallel algorithms to process information precisely to limit that possibility.
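A quick back-of-the-envelope sketch of that serial budget (using only the rough figures above; nothing here is a measurement):

```python
# How many *serial* steps fit into one perceptual "frame" at different
# clock rates? (200 Hz and 30-60 FPS are the rough numbers from this
# comment, not measured values.)
for clock_hz in (200, 2_000_000_000):        # neuron-ish rate vs. a 2 GHz CPU
    for frame_rate_fps in (30, 60):          # rough perceptual frame rates
        serial_steps = clock_hz / frame_rate_fps
        print(f"{clock_hz:>13,} Hz at {frame_rate_fps} fps -> "
              f"{serial_steps:,.1f} serial steps per frame")

# At 200 Hz only a handful of serial steps fit into each perceptual frame,
# so any realtime algorithm the brain runs has to be massively parallel;
# at 2 GHz tens of millions of serial steps fit into the same frame.
```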

On the issue of mental processing power, look at savants: some of them can count in primes all day long or recite a million digits of pi. For some reason the dysfunction in their brains allows them to tap into all sorts of computational power. The big issue with the brain is that we cannot focus on multiple things, and the way in which we perform, for example, math is not nearly as streamlined as a computer. For my own part, I am at my limit multiplying a 3-digit number by a 3-digit number in my head. This is of course a function of many things, but it is in part a function of the limitations of short-term memory and the way in which our brains let us do math.

luzr: You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same? Also, computer hardware exists for manipulating objects and acquiring sensory data. Furthermore: by hypothesis, the AI can improve itself better than we can, because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity. And you don't have to simulate reality perfectly to understand it, so there is no showstopper there. Total simulation is what we do when we don't have anything better.

What could an AI do, yet still be unable to self-optimize? Quite a bit, it turns out: at a minimum, everything that a modern human can do, and possibly a great deal more, since *we* have yet to demonstrate that we can engineer intelligence. (I admit here that it may be college-level material once discovered.)

If we define the singularity as the wall beyond which is unpredictable, I think we can have an effective singularity without FOOM. This follows from admitting that we can have computers that are superior to us in every way, without even achieving recursive modification. These machines then have all the attendant advantages of limitless hardware, replicability, perfect and expansive memory, deep serial computation, rationality by design, limitless external sensors, etc.

*if* it is useless to predict past the singularity, and *if* foom is unlikely to occur prior to the singularity, does this make the pursuit of friendliness irrelevant? Do we have to postulate foom = singularity in order to justify friendliness?

"So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does beg the question of what those changes were."

Perhaps the final cog was language. The original innovation is concepts: the ability to process thousands of entities at once by forming a class. Major efficiency boost. But chimps can form basic concepts and they didn't go foom.

Because forming a concept is not good enough - you have to be able to do something useful with it, to process it. Chimps got stuck there, but we passed abstractions through our existing concrete-only processing circuits by using a concrete proxy (a word).

Phil: It seems to me that the above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for time required for take-off, but if take-off took six months I still wouldn't expect that humans would be able to react. The AGI would probably be able to remain hidden until it was in a position to create a singleton extremely suddenly.

Aron: It's rational to plan for the most dangerous survivable situations.
However, it doesn't really make sense to claim that we can build computers that are superior to ourselves but that they can't improve themselves, since making them superior to us blatantly involves improving them. That said, yes, it is possible that some other path to the singularity could produce transhuman minds that can't quickly self-improve and which we can't quickly improve, for instance drug-enhanced humans, in which case hopefully those transhumans would share our values well enough that they could solve Friendliness for us.

AC, "raise the question" isn't strong enough. But I am sympathetic to this plea to preserve technical language, even if it's a lost cause; so I changed it to "demand the question". Does anyone have a better substitute phrase?

All these complications is why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff.
Phil: Wouldn't that be a reason to say, "I don't know what will happen"? And to disallow you from saying, "An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely"?

These are different problems, akin to "predict exactly where Apophis will go" and "estimate the size of the keyhole it has to pass through in order to hit Earth". Or "predict exactly what this poorly designed AI will end up with as its utility function after it goes FOOM" versus "predict that it won't hit the Friendliness keyhole".

A secret of a lot of the futurism I'm willing to try and put any weight on, is that it involves the startling, amazing, counterintuitive prediction that something ends up in the not-human space instead of the human space - humans think their keyholes are the whole universe, because it's all they have experience with. So if you say, "It's in the (much larger) not-human space" it sounds like an amazing futuristic prediction and people will be shocked, and try to dispute it. But livable temperatures are rare in the universe - most of it's either much colder or much hotter. A place like Earth is an anomaly, though it's the only place beings like us can live; the interior of a star is much denser than the materials of the world we know, and the rest of the universe is much closer to vacuum.

So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. When you look at it that way, it's not so radical a prediction, is it?
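As a toy numerical illustration of how narrow the in-between band is (the growth law below is invented purely for illustration, not a prediction of any real trajectory):

```python
# Integrate dI/dt = I**k for a few values of the returns exponent k.
# The law is made up; the point is only how sensitive the outcome is.

def grow(k, steps=10_000, dt=0.01, cap=1e12):
    """Euler-integrate dI/dt = I**k and report whether I crosses `cap`."""
    intelligence = 1.0
    for step in range(steps):
        intelligence += dt * intelligence ** k
        if intelligence > cap:
            return f"k={k}: crosses {cap:.0e} at t~{step * dt:.1f}"
    return f"k={k}: still only ~{intelligence:.3g} at t={steps * dt:.0f}"

for k in (0.5, 0.7, 1.0, 1.2):
    print(grow(k))

# Sub-linear returns (k < 1) stay on a leisurely timescale throughout this
# window; at linear or better returns the curve crosses any fixed threshold
# quickly, and for k > 1 the exact solution diverges in finite time.
```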

anon:

"You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same?"

I hope it will. Still, that would get it only to *preexisting* knowledge.

It can form many hypotheses, but it will have to TEST them (gain empirical knowledge). Think LHC.

BTW, note that there are problems in quantum physics that do not have an analytical solution. Some equations simply cannot be solved. Now of course, perhaps a superintelligence will find a way to do that, but I believe there are quite solid mathematical proofs that it is not possible.

[quote]
Also, computer hardware exists for manipulating objects and acquiring sensory data. Furthermore: by hypothesis, the AI can improve itself better then we can, because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity.
[/quote]

I am afraid that you have missed the part about the algorithm being essential, but not being the core of the AI mind. The mind can just as well be data. And it can be unoptimizable, for the same reasons some equations cannot be analytically solved.

[quote]
And you don't have to simulate reality perfectly to understand it, so there is no showstopper there.
[/quote]

To understand certain aspects of reality, yes. All I am saying is that understanding certain aspects might not be enough.

What I suggest is that the "mind" might be something like a network of interconnected numerical values. To an outside observer, there will be no order in the connections or values. To truly understand the "mind" even as poorly as by simulation, you would need a much bigger mind, as you would have to simulate and carefully examine each of the nodes.

Crude simulation does not help here, because you do not know which aspects to look for. Anything can be important.

The above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for time required for take-off

We've been using artificial intelligence for over 50 years now. If you haven't started the clock already, why not? What exactly are you waiting for? There is never going to be a point in the future where machine intelligence "suddenly" arises. Machine intelligence is better than human intelligence in many domains today. Augmented, cultured humans are a good deal smarter than unmodified ones today. Eschew machines, and see what kind of paid job you get with no phone, computer, or internet, if you want to see that for yourself.

"So I stick to qualitative predictions. "AI go FOOM"."

Even if it is wrong - though I think it is correct - it is the most important thing to consider.

I have a saying/hypothesis that a human trying to write code is like someone without a visual cortex trying to paint a picture - we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it's not our native environment.

Eliezer, this sounds wrong to me. Acquired skills matter more than having a sensory modality. Computers are quite good at painting, e.g. see the game Crysis. Painting with a brush isn't much easier than pixel by pixel, and it's not a natural skill. Neither is the artist's eye for colour and shape, or the analytical ear for music (do you know the harmonies of your favourite tunes?) You can instantly like or dislike a computer program, same as a painting or a piece of music: the inscrutable inner workings get revealed in the interface.

because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don't know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.

Now there's a scary thought.

Eliezer: So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM.
But the AI is tied up with the human timescale at the start. All of the work on improving the AI, possibly for many years, until it reaches very high intelligence, will be done by humans. And even after, it will still be tied up with the human economy for a time, relying on humans to build parts for it, etc. Remember that I'm only questioning the trajectory for the first year or decade.

(BTW, the term "trajectory" implies that only the state of the entity at the top of the heap matters. One of the human race's backup plans should be to look for a niche in the rest of the heap. But I've already said my piece on that in earlier comments.)

Thomas: Even if it is wrong - I think it is correct - it is the most important thing to consider.
I think most of us agree it's possible. I'm only arguing that other possibilities should also be considered. It would be unwise to adopt a strategy that has a 1% chance of making 90%-chance situation A survivable, if that strategy will make the otherwise-survivable 10%-chance situation B deadly.

>>> Computers are quite good at painting, e.g. see the game Crysis.

They do that using dedicated hardware. Try to paint Crysis in realtime 'per pixel', using a vanilla CPU.

>>> They do that using dedicated hardware. Try to paint Crysis in realtime 'per pixel', using a vanilla CPU.

Interestingly, today's high-end vanilla CPU (a quad-core at 3 GHz) would paint 7-8-year-old games just fine. Which means that in another 8 years, we will be capable of doing Crysis without a GPU.

