
November 27, 2008

Comments

There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.

Only if you completely ignore Colossus, the computer whose impact on the war was so great that the UK destroyed it afterwards rather than risk it falling into enemy hands.

"By the end of the war, 10 of the computers had been built for the British War Department, and they played an extremely significant role in the defeat of Nazi Germany, by virtually eliminating the ability of German Admiral Durnetz [Dönitz] to sink American convoys, by undermining German General Irwin [Erwin] Rommel in Northern Africa, and by confusing the Nazis about exactly where the American Invasion at Normandy France, was actually going to take place."

I.e., 10 computers rendered the entire German navy essentially worthless. I'd call that a 'supreme advantage' in naval military terms.

http://www.acsa2000.net/a_computer_saved_the_world.htm

"The Colossus played a crucial role in D-Day. By understanding where the Germans had the bulk of their troops, the Allies could decide which beaches to storm and what misinformation to spread to keep the landings a surprise."

http://kessler.blogs.nytimes.com/tag/eniac/

Sure, it didn't blow people up into little bits like an atomic bomb, but who cares? It stopped OUR guys getting blown up into little bits, and also devastated the opposing side's military intelligence and command/control worldwide. It's rather difficult to measure the lives that weren't lost, and the starvation and undersupply that didn't happen.

Arguably, algorithmic approaches had a war-winning level of influence even earlier:

http://en.wikipedia.org/wiki/Zimmermann_Telegram

Anonymous.

So nanotechnology can plausibly automate away much of the manufacturing in its material supply chain. If you already have nanotech, you may not need to consult the outside economy for inputs of energy or raw material.

Why would you not make use of resources from the outside world?

IMO, the issue in this area is with folks like Google, who take from the rest of the world but don't contribute everything they build back again, and so develop their own self-improving ecosystem that those outside the company have no access to. Do that faster than your competitors in a suitably diverse range of fields and eventually you find yourself getting further and further ahead, at least until the monopolies commission takes notice.

You might already have addressed this, but it seems to me that you have an underlying assumption that potential intelligence/optimization power is unbounded. Given what we currently know of the rules of the universe (the speed of light, the second law of thermodynamics, Amdahl's law, etc.), this does not seem at all obvious to me.

Of course, the true upper limit might be much higher than current human intelligence. But if any upper bound exists, it should influence the "FOOM" scenario: a 30-minute head start would then only mean arriving at the upper bound 30 minutes earlier.

How much of current R&D time is humans thinking, and how much is compiling projects, running computer simulations or doing physical experiments?

E.g. would having faster than human speed uploads, speed up getting results from the LHC by the ratio of their speed to us?
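The question above is essentially Amdahl's law: if only a fraction p of total R&D wall-clock time is human thinking, then speeding the thinking up s-fold caps the overall speedup well below s. A minimal sketch, with purely illustrative fractions:

```python
# Amdahl's law applied to R&D: only the "thinking" fraction p of total
# project time benefits from an s-fold speedup (e.g. fast uploads);
# simulations and physical experiments still run at real-world speed.
def rd_speedup(p, s):
    """Overall speedup when a fraction p of the work is accelerated s-fold."""
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative numbers: if half of LHC-style research time were human
# thinking, even a 1000x upload speedup would only roughly double throughput.
print(round(rd_speedup(0.5, 1000), 2))   # 2.0
print(round(rd_speedup(0.9, 1000), 2))   # 9.91
```

So the answer to the question depends almost entirely on how large the non-thinking fraction of experimental science really is.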

Also do you have some FLOPS per cubic centimeter estimations for nanocomputers? I looked at this briefly, and I couldn't find anything. It references a previous page that I can't find.

Why is Eliezer assuming that sustainable cycles of self-improvement are necessary in order to build an UberTool that will take over most industries? The Japanese Fifth Generation Computing Project was a credible attempt to build such an UberTool, but it did not much rely on recursive self-improvement (apart from such things as using current computer systems to design next-generation electronics). Contrary to common misconceptions, it did not even rely on _human level_ AI, let alone superhuman intelligence.

If this was a credible project (check the contemporary literature and you'll find extensive discussions about its political implications and the like) why not Douglas Engelbart's set of tools?

Well at long last you finally seem to be laying out the heart of your argument. Dare I hope that we can conclude our discussion by focusing on these issues, or are there yet more layers to this onion?

Will

"Also do you have some FLOPS per cubic centimeter estimations for nanocomputers? I looked at this briefly, and I couldn't find anything. It references a previous page that I can't find."

FLOPS is not a good measure of computing performance, since floating-point calculations are only one small aspect of what computers have to do. Further, the term "nanocomputer" as used here is misleading, since all of today's processors could be classified as nanocomputers: current ones use the 45nm process and are moving to the 32nm process.

Eliezer

"Just to make it clear why we might worry about this for nanotech, rather than say car manufacturing - if you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts. If your molecular factory can build solar cells, it can acquire energy as well."

Ignoring the other obvious issues in your post, this is of course not true. It is well known that you cannot just bond any atom to any other atom and get something useful. I would also point out that everyone tosses around the term "nano", including the Foresight Institute, but the label has been so abused by projects that don't deserve it that it has become nearly meaningless.

The other issue is the concept, which you seem to imply, that we will build everything from atoms in the future. This is silly, since building a 747 from the atoms up is much harder than just building it the way we do now. Nano-engineering has to be applied to the right problems to be useful.

"I don't think they've improved our own thinking processes even so much as the Scientific Revolution - yet. But some of the ways that computers are used to improve computers, verge on being repeatable (cyclic)."

This is not true either: current computers are designed with the previous generation of computers. If we compare how design is done on current processors with how it used to be done, we see large improvements; the computing industry has made huge leaps forward since the early days.

Finally, I have trouble with the assumption that once we have advanced nanotech (whatever that means) we will suddenly have access to tremendously more computing power. Nanotech as such will not do this. Regardless of whether we ever get molecular manufacturing, we will have 16nm processors in a few years, and computing power should continue to follow Moore's law until processor components are measured in angstroms. That being the case, the computing power needed to run the average estimates of the human brain's computational capacity already exists; the IBM Roadrunner system is one example. The real issue is software: there is no end of possible hardware improvement, but unless the software keeps up, who cares?
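The "until components are measured in angstroms" claim admits a quick back-of-envelope check, assuming one ~0.7x linear shrink per Moore's-law generation (the node sizes and two-year cadence are rough assumptions, not figures from the comment):

```python
import math

# Back-of-envelope check: how many process-node shrinks (each ~0.7x
# linear, one per Moore's-law generation) fit between a 32nm node and
# angstrom-scale (~0.1nm) features? Node sizes and cadence are rough.
def generations_left(start_nm=32.0, floor_nm=0.1, shrink=0.7):
    return math.log(floor_nm / start_nm) / math.log(shrink)

n = generations_left()
print(math.floor(n))        # 16 shrinks from 32nm down to ~1 angstrom
print(math.floor(n) * 2)    # ~32 years at one generation every 2 years
```

On those assumptions, conventional scaling alone has decades of headroom before it hits atomic dimensions.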

By nanocomputer I meant rod-logic or whatever the state of the art in hypothetical computing is. I want to see how it compares to standard computing.

I think the lure of nanocomputing is supposed to be low power consumption and the easy 3D stackability that entails. It is not sufficient to have small components if they are confined to a 2D layout and you can't pack many together without overheating.

Some numbers would be nice though.


An interesting modern analogy is the invention of the CDO in finance.

Its development led to a complete change in the rules of the game.

If you had asked a bank manager 100 years ago to envisage ultimate consequences assuming the availability of a formula/spreadsheet for splitting up losses over a group of financial assets, so there was a 'risky' tier and a 'safe' tier, etc., I doubt they would have said 'The end of the American Financial Empire'.
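The loss-splitting rule described above can be sketched in a few lines; the tranche sizes and loss figures below are hypothetical:

```python
# A minimal sketch of the loss-splitting rule described above: pool
# losses hit the junior ("risky") tranche first, and spill into the
# senior ("safe") tranche only once the junior tranche is wiped out.
# Tranche sizes and the loss figures are hypothetical.
def tranche_losses(total_loss, junior_size, senior_size):
    junior_hit = min(total_loss, junior_size)
    senior_hit = min(max(total_loss - junior_size, 0.0), senior_size)
    return junior_hit, senior_hit

# A $100M pool split 20/80: a $10M loss stays in the junior tranche,
# while a $30M loss wipes it out and eats $10M of the senior tranche.
print(tranche_losses(10.0, 20.0, 80.0))   # (10.0, 0.0)
print(tranche_losses(30.0, 20.0, 80.0))   # (20.0, 10.0)
```

The senior tranche looks safe precisely because the junior tranche absorbs losses first, which is what made it so easy to sell.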

Nonetheless it happened. The ability to sell tranches of debt at arbitrary risk levels led to the banks lending more. That led to mortgages becoming more easily available. That led to dedicated agents making commission from the sheer volume of lending that became possible. That led to reduced lending standards, more agents, more lending. That led to higher profits, which had to be maintained to keep shareholders happy. That led to increased use of CDOs, more agents, more lending, lower standards... a housing boom... which led to more lending... which led to excessive spending... which has left the US over-borrowed and talking about a second great depression.

etc.

It's not quite the FOOM Eliezer talks about, but it's a useful example of the laws of unintended consequences.

Anonymous.

Robin: Well at long last you finally seem to be laying out the heart of your argument. Dare I hope that we can conclude our discussion by focusing on these issues, or are there yet more layers to this onion?

It takes two people to make a disagreement; I don't know what the heart of my argument is from your perspective!

This essay treats the simpler and less worrisome case of nanotech. Quickie preview of AI:

When you upgrade to AI there are harder faster cascades because the development idiom is even more recursive, and there is an overhang of hardware capability we don't understand how to use;

There are probably larger development gaps between projects due to a larger role for insights;

There are more barriers to trade between AIs, because of the differences of cognitive architecture - different AGI projects have far less in common today than nanotech projects, and there is very little sharing of cognitive content even in ordinary AI;

Even if AIs trade improvements among themselves, there's a huge barrier to applying those improvements to human brains, uncrossable short of very advanced technology for uploading and extreme upgrading;

So even if many unFriendly AI projects are developmentally synchronized and mutually trading, they may come to their own compromise, do a synchronized takeoff, and eat the biosphere; without caring for humanity, humane values, or any sort of existence for themselves that we regard as worthwhile...

But I don't know if you regard any of that as the important part of the argument, or if the key issue in our disagreement happens to be already displayed here. If it's here, we should resolve it here, because nanotech is much easier to understand.

"In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."

Indeed.

There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.

Did atomic bombs give the US "a supreme military advantage" at the end of WW2?

If Japan had got the bomb in late 1945 instead of the US, could it have conquered the world? Or Panama, if it were the sole nuclear power in 1945?

If not, then did possession of the bomb give "a supreme military advantage"?

"In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."

If you believe this you should be in favor of the slowing down of AI research and the speeding up of work on enhancing human intelligence. The smarter we are the more likely we are to figure out friendly AI before we have true AI.

Also, if you really believe this shouldn't you want the CIA to start assassinating AI programmers?

I can accelerate the working-out of FAI theory by applying my own efforts and by recruiting others. Messing with macro tech developmental forces to slow other people down doesn't seem to me to be something readily subject to my own decision.

I don't trust that human intelligence enhancement can beat AI of either sort into play - it seems to me to be running far behind at the moment. So I'm not willing to slow down and wait for it.

Regarding the CIA thing, I have ethics.

It's worth noting that even if you consider, say, gentle persuasion, in a right-tail problem, eliminating 90% of the researchers doesn't get you 10 times as much time, just one standard deviation's worth of time or whatever.
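A quick Monte Carlo illustrates the right-tail point, under the purely illustrative assumption that each researcher's time-to-breakthrough is normally distributed and the field's timeline is the minimum over researchers:

```python
import random
import statistics

# Monte Carlo sketch of the right-tail claim: model each researcher's
# time-to-breakthrough as Normal(mu, sigma); the field's timeline is the
# MINIMUM over all researchers. Shrinking the field 10x (10,000 -> 1,000)
# delays that minimum by a fraction of one sigma, not by a factor of 10.
# All distributional assumptions and numbers here are illustrative.
def expected_first(n, mu=20.0, sigma=3.0, trials=200, seed=1):
    rng = random.Random(seed)
    mins = [min(rng.gauss(mu, sigma) for _ in range(n)) for _ in range(trials)]
    return statistics.mean(mins)

big_field, small_field = expected_first(10_000), expected_first(1_000)
print(round(small_field - big_field, 2))   # a fraction of sigma=3.0, not 10x
```

The expected minimum of n normal draws sits roughly sqrt(2 ln n) standard deviations below the mean, so a 10x cut in n moves it by well under one sigma.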

The sort of theory that goes into hacking up an unFriendly AI and the sort of theory that goes into Friendly AI are pretty distinct as subjects.

In your one upload team a day ahead scenario, by "full-scale nanotech" you apparently mean oriented around very local production. That is, they don't suffer much efficiency reduction by building everything themselves on-site via completely automated production. The overall efficiency of this tech with available cheap feedstocks allows a doubling time of much less than one day. And in much less than a day this tech plus feedstocks cheaply available to this one team allow it to create more upload-equivalents (scaled by speedups) than all the other teams put together. Do I understand you right?

As I understand nanocomputers, it shouldn't really take all that much nanocomputer material to run more uploads than a bunch of bios - like, a cubic meter of nanocomputers total, and a megawatt of electricity, or something like that. The key point is that you have such-and-such amount of nanocomputers available - it's not a focus on material production per se.

Also, bear in mind that I already acknowledged that you could have a slow runup to uploading such that there's no hardware overhang when the very first uploads capable of doing their own research are developed - the one-day lead and the fifty-minute lead are two different scenarios above.

EY: "In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."

I'm not convinced that any realistic amount of computing power will let you "brute force" AI. If you've written a plausibility argument for this, then do link me...

> Of course, the true upper limit might be much higher than current human intelligence. But if there exists any upper bound, it should influence the "FOOM"-scenario. Then a 30-minute head start would only mean arriving at the upper bound 30 minutes earlier.

Rasmus Faber: plausible upper limits for the ability of intelligent beings include such things as destroying galaxies and creating private universes.

What stops an Ultimate Intelligence from simply turning the Earth (and each competitor) into a black hole in those 30 minutes of nigh-omnipotence? Even a very weak intelligence could do things like just analyze the OS being used by the rival researchers and break in. Did they keep no backups? Oops; game over, man, game over. Did they keep backups? Great, but now the intelligence has just bought itself a good fraction of an hour (it just takes time to transfer large amounts of data). Maybe even more, depending on how untried and manual their backup system is. And so on.
