September 16, 2008

Comments

What did you think about engineered plagues, use of nuclear weapons to induce extreme climate change, and robotic weapons advanced enough to kill off humanity but too limited to carry on civilization themselves?

Carl: None of those would (given our better understanding) be as bad as great plagues that humanity has lived through before.

I noticed engineered plagues after noticing nanotech. Neither nuclear weapons nor automated robotic weapons struck me as probable total extinction events.

Nanotechnology will rather obviously wipe out existing protein-DNA organisms - by replacing them with something much better.

However, ending life or civilization doesn't look at all likely. It wouldn't be permitted by those in charge. The whole "oops-apocalypse" scenario seems implausible to me - our descendants simply won't be so stupid and incompetent as to fumble on that scale.

In the late 1990s I figured roughly even odds of a doomsday catastrophe with nanotech. A mistake with a weapon seems much more likely than a gray-goo accident, though. I also think that the risk goes up with the asymmetry of capability in nano; that is, the closer to a monopoly on nano that exists, the more likely a doomsday scenario becomes. Multiple strands of development both act as a deterrent to would-be abusers and provide at least some hope of combating an actual release.

Tim: Eh, you make a big assumption that our descendants will be the ones to play with the dangerous stuff and that they will be more intelligent for some reason. That seems to acknowledge the intelligence / nanotech race condition that is of so much concern to singularitarians.

When I read these stories you tell about your past thoughts, I'm struck by how different your experiences with ideas were. Things you found obvious seem subtle to me. Things you discovered with a feeling of revelation seem pedestrian. Things you dismissed wholesale and now borrow a smidgen of seem like they've always been a significant part of my life.

Take, for example, the subject of this post: technological risks. I never really thought of "technology" as a single thing, to be judged good or bad as a whole, until after I had heard a great deal about particular cases, some good and some bad.

When I did encounter that question, it seemed clear that technology as a whole was good, because the sum total of our technology had greatly improved the life of the average person. It also seemed clear that this did not make every specific technology good.

I don't know about total extinction, but there was a period ending around the time I was born (I think we're about the same age) when people thought that they, their families, and their friends could very well be killed in a nuclear war. I remember someone telling me that he started saving for retirement when the Berlin Wall fell.

With that in mind, I wonder about the influence of our experiences with ideas. If two people agree that technology is good overall but specific technologies can be bad, will they tend to apply that idea differently if one was taught it as a child and the other discovered it in a flash of insight as an adult? That might be one reason I tend to agree with the principles you lay out but not the conclusions you reach.

Drexler was worried about just those sorts of problems, so he put off writing up his ideas until he realized that developments in multiple fields were heading in the direction of nanotech without any realistic criticism; that's when he wrote "Engines of Creation". He also made the point that there is no really practical way of preventing the development of molecular nanotech: there are too many reasons for developments leading in that direction. If one nation outlaws it, or too heavily regulates it, it will just be developed elsewhere, maybe even underground, since advancing technology is making it easier and cheaper to do small-scale R&D.

Re: you make a big assumption that our descendants will be the ones to play with the dangerous stuff and that they will be more intelligent for some reason.

I doubt it. You are probably misinterpreting what I mean by "our" or "descendants". Future living organisms will be descended from existing ones - that's about all I mean.

Re: That seems to acknowledge the intelligence / nanotech race condition that is of so much concern to singularitarians.

I figure we will have AI before we have much in the way of nanotechnology - if that's what you mean.

Building minds is much easier than building bodies. For one thing, you only need a tiny number of component types for a mind.

However, rather obviously the technologies will feed off each other - mutually accelerating each other's development.

"I noticed engineered plagues after noticing nanotech. Neither nuclear weapons nor automated robotic weapons struck me as probable total extinction events."
What was the probability threshold below which extinction and astronomical waste concerns no longer drew attention?

Re: The whole "oops-apocalypse" scenario seems implausible to me - our descendants simply won't be so stupid and incompetent as to fumble on that scale.

"if you're careless sealing your pressure suit just once, you die." We have come very close to fumbling on that scale already. Petrov Day is next week.

Ah yes, the sunlight-reflected-off-clouds end-of-civilisation scenario. Forgive me for implicitly not giving that more weight.

One disturbing thing about the Petrov issue that I don't think anyone mentioned last time is that by praising nuclear non-retaliators we could be making future nuclear attacks more likely by undermining MAD.

If groups with MNT have first-strike capability, then you'd expect the winners of WW3 to remain standing at least. I'm not sure how much of a consolation that is.

Several places in the US did have regulations protecting the horse industry from the early automobile industry - I'm not sure what "the present system" refers to as opposed to that sort of thing.

Re: One disturbing thing about the Petrov issue that I don't think anyone mentioned last time is that by praising nuclear non-retaliators we could be making future nuclear attacks more likely by undermining MAD.

Petrov isn't praised for being a non-retaliator. He's praised for doing good probable inference -- specifically, for recognizing that the detection of only 5 missiles pointed to malfunction, not to a U.S. first strike, and that a "retaliatory" strike would initiate a nuclear war. I'd bet counterfactually that Petrov would have retaliated if the malfunction had caused the spurious detection of a U.S. first strike with the expected hundreds of missiles.
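
To make that inference concrete, here is a toy Bayesian sketch of the argument, in Python. Every number in it is an illustrative assumption, not a historical estimate; the point is only that a handful of detected missiles is far better explained by a malfunction than by a deliberate first strike.

    # Toy Bayesian update for the Petrov scenario.
    # All probabilities are illustrative assumptions, not historical data.
    p_attack = 0.001               # prior: a genuine U.S. first strike is under way
    p_malfunction = 0.01           # prior: the early-warning system is malfunctioning

    p_obs_given_attack = 0.01      # a real first strike means hundreds of missiles,
                                   # so detecting only ~5 is very unlikely
    p_obs_given_malfunction = 0.5  # a malfunction can easily produce a few false tracks

    posterior_attack = (p_attack * p_obs_given_attack) / (
        p_attack * p_obs_given_attack + p_malfunction * p_obs_given_malfunction
    )
    print(round(posterior_attack, 4))  # about 0.002

Under these made-up numbers the posterior probability of a real attack comes out around 0.2%, which is the shape of the inference Petrov is praised for.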

"If you're careless sealing your pressure suit just once, you die" to me seems to imply that proper pressure suit design involves making it very difficult to seal carelessly.

I understand that there are many ways in which nanotechnology could be dangerous, even to the point of posing extinction risks, but I do not understand why these risks seem inevitable. I would find it much more likely that humanity will invent some nanotech device that gets out of hand, poisons a water supply, kills several thousand people, and needs to be contained or quarantined - leading to massive regulation of nanotech development - than that we will make a nanotech mistake that immediately depressurizes the whole space suit, is impossible to contain, and kills us all.

A recursively improving, superintelligent AI, on the other hand, seems much more likely to fuck us over, especially if we're convinced it's acting in our best interest for the beginning of its 'life,' and problems only become obvious after it's already become far more 'intelligent' than we are.

Lara Foster, to get what people are worried about, extrapolate the danger of recursively self-improving intelligence to self-reproducing nanotechnology. We want what it can provide, so we spread nanomachines, and from there you can calculate how many doublings would be necessary to convert all the molecules on the surface of the planet to nano-assemblers. Ten doublings is a factor of 1,024, so we probably would not realize how overpowered we were until far too late.

As you say, this is not the most likely extinction event. Losing Eurasia and Africa to a sign error would be a bad thing, but not a full extinction event. The downside of being a nanomachine is that trans-Atlantic swimming is hard with 2 nm-long legs.

But if a nano-assembler can reproduce itself in 6 minutes, you have one thousand in an hour, one million the next hour, one billion the next hour... not a lot of time for regulation (see the rough sketch below).

The only one who can react to a problem that big in that timespan is that recursively self-improving AI we have been keeping in the box over there. Guess it's time to let it out. (Say, who is responsible for that first nano-assembler anyway?)
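
For anyone who wants the arithmetic behind the doubling talk above, here is a rough back-of-the-envelope sketch in Python. The doubling time, the mass of a single assembler, and the target mass are all made-up illustrative numbers; only the exponential shape of the curve is the point.

    import math

    # Back-of-the-envelope doubling arithmetic (illustrative numbers only).
    doubling_time_min = 6.0      # assumed replication time per assembler
    assembler_mass_kg = 1e-18    # assumed mass of one assembler (~1 femtogram)
    target_mass_kg = 1e18        # assumed mass of surface material to convert

    # Population after each of the first few hours, starting from one assembler:
    for hour in (1, 2, 3):
        doublings = hour * 60 / doubling_time_min     # 10 doublings per hour
        print(hour, "h:", 2 ** int(doublings))        # ~1e3, ~1e6, ~1e9

    # Doublings (and hours) needed to convert the whole target mass:
    total_doublings = math.log2(target_mass_kg / assembler_mass_kg)
    print(round(total_doublings), "doublings,",
          round(total_doublings * doubling_time_min / 60), "hours")  # ~120 doublings, ~12 hours

Even with these toy numbers, the gap between "noticeable problem" and "all of the target material consumed" is on the order of hours, which is why regulation after the fact looks too slow.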

I find rapidly self-replicating manufacturing (probably, but maybe not necessarily, MNT) leading to genocidal conventional and nuclear war on a previously impossible scale much more likely than any use or accidental outbreak of replicators in the field.

Note that the non-nucleic replicators are already on the loose - they are commonly known as memes.

Nick: Why?

Zubon,
Your model assumes that these 'nano-assemblers' will be able to reproduce themselves using *any* nearby molecules and not some specific kind of molecule or substance. It would seem obviously unwise to invent something that could eat away *any* matter you put near it for the sake of self-reproduction. Why would we ever design such a thing? Even Kurt Vonnegut's hypothetical Ice-Nine could only crystallize water, and only at certain temperatures - creating something that essentially crystallizes EVERYTHING does not seem trivial, easy, or advisable to anyone. Maybe you should be clamouring for regulation of who can use nano-scale design technology so madmen don't do this to deliberately destroy everything. Maybe this should be a top national-security issue. Heck - maybe it IS a top national-security issue and you just don't know it. Changing security opinions still seems safer and easier than initiating a recursively self-improving general AI.

The scenario you propose is, as I understand it, "Grey Goo," and I was under the impression that this was not considered a primary extinction risk (though I could be wrong there).

I find Freitas one of the best writers about the various goos. See this article for example.

Lara Foster, since you agree on the important points, that argument seems resolved. On the materials question, please note the Freitas article cited, particularly that many nanotech plans involve using carbon. As a currently carbon-based lifeform, I am more concerned about those molecules than about *any*.

Re: It would seem obviously unwise to invent something that could eat away *any* matter you put near it for the sake of self-reproduction.

Like a bacterium that could digest *anything*? It would be a neat trick. What happens if it meets another creature of its own type?

Note that the behaviour of replicators is not terribly different from the way AIs tend to slurp up space/time and mass/energy and convert them into utility.
