
November 29, 2008

Comments

Robin is saying that the chance of a friendliness-requiring event (from a total preference utilitarian point of view such as his own, not from the point of view of relevant decisionmakers) is below 1%, that we should seriously fear self-fulfilling prophecies of such events, and that we should frame things to lower estimates of the likelihood of friendliness-requiring events among our audiences. There are obvious difficulties in taking a speaker's presentation of that conjunction at face value.

The cost-benefit of talking honestly about potential vulnerabilities depends on the ability of different actors to eventually do the analysis themselves. If price-fixing were legal, it would be silly to avoid explaining the mechanics of cartels (when arguing for a legal prohibition of outright price-fixing, perhaps) for fear of giving ideas to large corporations. Institutions with abundant resources and a chance to vastly improve their position through fairly obvious schemes like price-fixing can generally be expected to figure it out themselves, whereas consumers, voters, and regulators will have much weaker incentives to identify the possibility.

On the other hand, FDR was convinced of the high importance of nuclear weapons by a concerted effort by forward-thinking and high-status academics, and other countries either were less focused on their programs or were strongly influenced by espionage reports from the Manhattan Project. The National Nanotechnology Initiative also comes to mind.

Carl, I can both think a chance is low and warn against increasing that chance. Sure people can do an analysis later, but their cost-benefit estimates should depend on their estimates of others' behavior.

Full scale nanotechnology might give an aggressor nation the capacity to instigate a decapitating first strike attack without itself suffering significant losses. The potential of such technology might lead to a total tech war.

This technology should be governed by international law, perhaps international commerce and technology law. If a conflict arises, it is possible that not only countries will battle in this tech war, but also companies within a country.

"Carl, I can both think a chance is low and warn against increasing that chance"

Of course you can, but you just presented a rationale that seems to justify (given your stated values) erring on the side of presenting inaccurately low estimates. This is so even in light of the later statement:

"Yes, we must be open to evidence that other powerful groups will treat new techs as total wars."

Wouldn't the claims in this post, combined with your preference utilitarianism, justify listening to new evidence but provisionally claiming that it was weak when presenting probability estimates? I don't think this is very likely in your case, but it's a hazard that arises around the discussion of many types of 'dangerous knowledge.'

James, some techs, like guns, can be more useful in war than in peace, and give a first-strike advantage, and so reasonably lead to more fears of war than would a generic tech. But brain emulations do not seem to be such a tech.

Carl, yes sometimes lies can give advantages, and this might be such a time, but I am not lying.

I'll accept that.

I generally refer to this scenario as "winner take all" and had planned a future post with that title.

I'd never have dreamed of calling it a "total tech war" because that sounds much too combative, a phrase that might spark violence even in the near term. It also doesn't sound accurate, because a winner-take-all scenario doesn't imply destructive combat or any sort of military conflict.

I moreover defy you to look over my writings, and find any case where I ever used a phrase as inflammatory as "total tech war".

I think that, in this conversation and in the debate as you have just now framed it, "tu quoque!" is actually justified here.

Anyway - as best I can tell, the natural landscape of these technologies, which introduce disruptions much larger than farming or the Internet, is winner-take-all without special effort. It's not a question of ending up in that scenario by making special errors. We're just there. Getting out of it, not getting into it, is what would take special effort, and I'm not sure that's possible. -- such would be the stance I would try to support.

Also:

If you try to look at it from my perspective, then you can see that I've gone to tremendous lengths to defuse both the reality and the appearance of conflict between altruistic humans over which AI should be built. "Coherent Extrapolated Volition" is extremely meta; if all competent and altruistic Friendly AI projects think this meta, they are far more likely to find themselves able to cooperate than if one project says "Libertarianism!" and another says "Social democracy!"

On the other hand, the AGI projects run by the meddling dabblers do just say "Libertarianism!" or "Social democracy!" or whatever strikes their founder's fancy. And so far as I can tell, as a matter of simple fact, an AI project run at that level of competence will destroy the world. (It wouldn't be a good idea even if it worked as intended, but that's a separate issue.)

As a matter of simple decision theory, it seems to me that an unFriendly AI which has just acquired a decisive first-mover advantage is faced with the following payoff matrix:

Share Tech, Trade: 10 utilons
Take Over Universe: 1000 utilons

As a matter of simple decision theory, I expect an unFriendly AI to take the second option.
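A minimal sketch of that expected-utility comparison, with one assumption added that is not in the comment: suppose the takeover only succeeds with some probability p and pays nothing otherwise. For an agent whose utility is linear in the outcome, takeover still dominates whenever p exceeds 10/1000 = 1%.

```python
# Toy comparison only: the payoffs are the ones quoted above, and the takeover
# success probability p is a hypothetical addition, not part of the comment.
def preferred_action(p):
    """Pick the higher-expected-utility option for an agent with linear utility."""
    trade = 10            # Share Tech, Trade
    takeover = 1000 * p   # Take Over Universe, discounted by its chance of success
    return "take over" if takeover > trade else "share and trade"

for p in (0.005, 0.02, 0.5, 1.0):
    print(p, preferred_action(p))
# Takeover is preferred for any success probability above 10/1000 = 1%.
```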

Do you agree that if an unFriendly AI gets nanotech and no one else has nanotech, it will take over the world rather than trade with it?

Or is this statement something that is true but forbidden to speak?

We could be in any of the three following domains:

1) The tech landscape is naturally smooth enough that, even if participants don't share technology, there is no winner-take-all.

2) The tech landscape is somewhat steep. If participants don't share technology, one participant will pull ahead and dominate all others via compound interest. If they share technology, the foremost participant will only control a small fraction of the progress and will not be able to dominate all other participants.

3) The tech landscape contains upward cliffs, and/or progress is naturally hard to share. Even if participants make efforts to trade progress up to time T, one participant will, after making an additional discovery at time T+1, be faced with at least the option of taking over the world. Or it is plausible for a single participant to withdraw from the trade compact and either (a) accumulate private advantages while monitoring open progress or (b) do its own research, and still take over the world.

(2) is the only regime where you can have self-fulfilling prophecies. I think nanotech is probably in (2) but contend that AI lies naturally in (3).
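A toy model of regime (2), with made-up growth rates and a simple pooling rule (none of these numbers come from the comment; only the compounding effect is the point): without sharing, a small per-period edge compounds into an ever-growing lead; with sharing, nobody gets far ahead.

```python
# Illustrative assumptions: two participants, fixed per-period growth rates,
# and (optionally) full pooling of progress at the end of each period.
def lead_after(share_tech, periods=100):
    capability = [1.0, 1.0]
    growth = [1.06, 1.05]   # participant 0 holds a slight per-period edge
    for _ in range(periods):
        capability = [c * g for c, g in zip(capability, growth)]
        if share_tech:
            frontier = max(capability)          # trading tech: all reach the frontier
            capability = [frontier] * len(capability)
    return capability[0] / capability[1]

print("no sharing:", round(lead_after(False), 2))  # ~2.58x lead, still compounding
print("sharing:   ", round(lead_after(True), 2))   # 1.0, the edge never accumulates
```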

Share Tech, Trade: 10 utilons
Take Over Universe: 1000 utilons

But you would ultimately get less through taking over the world than trading with it. Temporarily, you would get more, as you simply grab everything in sight, but long term you would get only the output of a bunch of slaves, vs. the higher output of free men.

And surely an AI, who would potentially live forever, would think long term, and not create such a table. But then I guess a dabbler-designed one might not.

"And so far as I can tell, as a matter of simple fact, an AI project run at that level of competence will destroy the world."

How is that even logical? Unless you plan to build an AI and immediately hook it up to all your nuclear weapons, or give it free run of any and all factories for making weapons or nanotech (whatever that is, with nano being applied to all manner of things and now being essentially a buzzword), there is no reason to assume that an AI, friendly or not, would destroy the world. That entire scenario is fictional; one can build an AI in an isolated computer system with little to no danger. Sure, it's true that one could destroy the world through AI, but an unfriendly AI does not necessitate the conclusion that the world will be destroyed. The issue here is that there seems to be a lack of understanding of what is real in the field of AI and what is purely fictional.

If we are totally honest, we are so far away from AGI at the moment that debating friendliness at this point is like debating nuclear war before the discovery of hard radiation. As long as one keeps a human in the loop with the AI and properly designs the hardware and what the hardware is connected to, one can make a safe trip into "mind design space".

A final note: there is this continued talk of 100s or 1000s of times human speed. What does that even mean? What would thinking at a thousand times our current speed mean? How would that even work? Our thoughts, senses, etc. are tied closely to our sense of time. If you speed that up, why would one expect that to be better? It seems to me that this would merely be like listening to music on fast-forward.

I have made a post at my blog on the differences between conflicts among humans and conflicts among optimizers whose utility is linear in resources.

Thanks, Carl.

To increase your chances of survival you want to have as little compute power as you can get away with, so that you don't waste resources on things that don't intrinsically get you more resources. Look at plants: fantastically successful, but with no use for a brain, and they would be worse off with one.

Greater computational ability doesn't automatically lead to winning. Only if there is something worth discovering or predicting with the intelligence, another energy source or a cheaper way of doing something you need to do to survive, does it make sense to invest in compute power.

Carl, it's fantastic that you're finally blogging. You should link to it in your posting name, like the rest of us.

There may be, as Robin notes, a bias towards seeing new tech through the lens of total war. There is also a bias toward optimism, based on millennia of shot messengers, and a bias to see the future through the lens of the present.

I see in this post a warning against mentioning potentially disastrous consequences of AI. This scares me. When even rationalists ought to feel ashamed of considering such possibilities, we are in trouble. Life is a powerful, dangerous thing. When we are considering creating new, potentially more powerful forms of life, we need to consider how to do so without rendering ourselves extinct. The challenge is not to prove that we will not end up with a stable, economically competitive, non-combative equilibrium. The challenge is to look for every possibility that could lead to total war and make damn sure we've considered them before we touch the on switch.

Carl's 'Reflective Disequilibria' post is a good one. We cannot assume conflict with or between AIs will follow the trend of human wars. The competitive superiority of 'free men' makes for some damn good movies and has a strong impact on human history. However, that is an idiosyncrasy that a third-generation em will probably not possess.

Will underestimates the competitive potential of the triffid.

A winner-take-all scenario doesn't imply destructive combat or any sort of military conflict.

Just so. For example, in biology, genes can come to dominate completely by differential reproductive success - not just by killing all your competitors. Warfare is different from winning.

Eliezer, if everything is at stake then "winner take all" is "total war"; it doesn't really matter if they shoot you or just starve you to death. The whole point of this post is to note that anything can be seen as "winner-take-all" just by expecting others to see it that way. So if you want to say that a particular tech is more winner take all than usual, you need an argument based on more than just this effect. And if you want to argue it is far more so than any other tech humans have ever seen, you need a damn good additional argument. It is possible that you could make such an argument work based on the "tech landscape" considerations you mention, but I haven't seen that yet. So consider this post to be yet another reminder that I await hearing your core argument; until then I set the stage with posts like this.

To answer your direct questions, I am not suggesting forbidding speaking of anything, and if "unfriendly AI" is defined as an AI that sees itself in a total war, then sure, it would take a total-war strategy of fighting, not trading. But you haven't actually defined "unfriendly" yet.

Carl, I replied at your blog post, but will repeat my point here this time. You say total war doesn't happen now because leaders are "comfortable" and humans are risk-averse with complex preferences, but that there would be a total war over the solar system later because evolved machines' preferences would be linear in raw materials. But evolutionary arguments don't say we evolve to care only about raw materials, and they only suggest risk-neutrality with respect to fluctuations that are largely independent across copies that share "genes." With respect to correlated fluctuations, evolutionary arguments suggest risk-averse log utility.
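A numerical illustration of that last point about correlated fluctuations (the gamble, probabilities, and bet sizes here are illustrative assumptions, not Robin's or Carl's): when the same risky but favorable gamble hits the whole lineage at once, the bet size that maximizes expected resources (everything) almost surely ruins it, while the log-utility (Kelly) bet size compounds.

```python
import random
import statistics

def final_wealth(bet_fraction, rounds=200, p_win=0.6):
    """One lineage facing `rounds` perfectly correlated gambles: the staked
    fraction of wealth is doubled with probability p_win, lost otherwise."""
    w = 1.0
    for _ in range(rounds):
        if random.random() < p_win:
            w *= 1 + bet_fraction
        else:
            w *= 1 - bet_fraction
    return w

random.seed(0)
for frac in (1.0, 0.2):  # all-in (maximizes expected wealth) vs Kelly fraction 2*p_win - 1
    outcomes = [final_wealth(frac) for _ in range(2000)]
    print(f"bet {frac:.1f}: median final wealth {statistics.median(outcomes):.3g}")
# Betting everything has the higher expected value but a median of zero;
# the log-optimal bettor's wealth compounds steadily instead.
```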

@GenericThinker

Unless you plan to build an AI and immediately hook it up to all your nuclear weapons, or give it free run of any and all factories for making weapons or nanotech (whatever that is, with nano being applied to all manner of things and now being essentially a buzzword), there is no reason to assume that an AI, friendly or not, would destroy the world. That entire scenario is fictional; one can build an AI in an isolated computer system with little to no danger.
. . .
As long as one keeps a human in the loop with the AI and properly designs the hardware and what the hardware is connected to, one can make a safe trip into "mind design space".

Eliezer has made a very persuasive case that a "human in the loop" would not be adequate to contain a super-intelligent AI. See also his post That Alien Message. That post also addresses the point you raise when you write, "A final note: there is this continued talk of 100s or 1000s of times human speed. What does that even mean? What would thinking at a thousand times our current speed mean? How would that even work? Our thoughts, senses, etc. are tied closely to our sense of time. If you speed that up, why would one expect that to be better? It seems to me that this would merely be like listening to music on fast-forward."

Neither of these addresses the question of whether an artificial super-intelligence is likely to be built in the near future. But they might make you reconsider your expectation that it could be easily controlled if it was built.

Ian:
What benefits could slaves or free men provide an AI (or a group of EMs) that it could not do itself 100 times better with nanobots and new processor cores? Foxes do not enslave rabbits.
Even if it were just deciding whether to enslave us or let us be 'free', it would know that it could always free us later (us or future generations).

GenericThinker:
A simulated human can be put in a simulated environment that is sped up right along with them. They do a year's worth of cognitive work, and play, and sleep, in hours. They experience the full year.
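The arithmetic behind that, taking the thousandfold figure being tossed around in the thread as the assumed speedup:

```python
speedup = 1000                              # the factor discussed in the thread (assumed)
wall_clock_hours = 365.25 * 24 / speedup    # hours of real time per subjective year
print(f"about {wall_clock_hours:.1f} wall-clock hours per subjective year")  # ~8.8
```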

if you want to say that a particular tech is more winner take all than usual, you need an argument based on more than just this effect. And if you want to argue it is far more so than any other tech humans have ever seen, you need a damn good additional argument.

IT is the bleeding edge of technology - and is more effective than most tech at creating inequalities - e.g. look at the list of top billionaires.

Machine intelligence is at the bleeding edge of IT. It is IT's "killer application". Whether its inventors will exploit its potential to provide wealth will be a matter of historical contingency - but the potential certainly looks as though it will be there. In particular, it looks as though it is likely to be mostly a server-side technology - and those are the easiest for the owners to hang on to - by preventing others from reverse-engineering the technology.
