
December 15, 2008

Comments

Don't bogart that joint, my friend.

The one ring of power sits before us on a pedestal; around it stand a dozen folks of all races. I believe that whoever grabs the ring first becomes invincible, all powerful. If I believe we cannot make a deal, that someone is about to grab it, then I have to ask myself whether I would wield such power better than whoever I guess will grab it if I do not. If I think I'd do a better job, yes, I grab it. And I'd accept that others might consider that an act of war against them; thinking that way, they may well kill me before I get to the ring.

With the ring, the first thing I do then is think very, very carefully about what to do next. Most likely the first task is deciding whom to get advice from. And then I listen to that advice.

Yes, this is a very dramatic story, and therefore one whose likelihood we are biased to overestimate.

I don't recall where exactly, but I'm pretty sure I've already admitted on this blog, within the last month, that I'd "grab the ring."

I'm not asking you if you'll take the Ring, I'm asking what you'll do with the Ring. It's already been handed to you.

Take advice? That's still something of an evasion. What advice would you offer you? You don't seem quite satisfied with what (you think is) my plan for the Ring - so you must already have an opinion of your own - what would you change?

Eliezer, I haven't meant to express any dissatisfaction with your plans to use a ring of power. And I agree that someone should be working on such plans even if the chances of it happening are rather small. So I approve of your working on such plans. My objection is only that if enough people overestimate the chance of such a scenario, it will divert too much attention from other important scenarios. I similarly think global warming is real, worthy of real attention, but that it diverts too much attention from other future issues.

This is a great device for illustrating how devilishly hard it is to do anything constructive with such overwhelming power, yet not be seen as taking over the world. If you give each individual whatever they want you’ve just destroyed every variety of collectivism or traditionalism on the planet, and those who valued those philosophies will curse you. If you implement any single utopian vision everyone who wanted a different one will hate you, and if you limit yourself to any minimal level of intervention everyone who wants larger benefits than you provide will be unhappy.

Really, I doubt that there is any course you can follow that won’t draw the ire of a large minority of humanity, because too many of us are emotionally committed to inflicting various conflicting forms of coercion on each other.

If you invoke the unlimited power to create a quadrillion people, then why not a quintillion?

One of these things is much like the other...

Infinity screws up a whole lot of this essay. Large-but-finite is way way harder, as all the "excuses", as you call them, become real choices again. You have to figure out whether to monitor for potential conflicts, including whether to allow others to take whatever path you took to such power. Necessity is back in the game.

I suspect I'd seriously consider just tiling the universe with happy faces (very complex ones, but still probably not what the rest of y'all think you want). At least it would be pleasant, and nobody would complain.

This question is a bit off-topic and I have a feeling it has been covered in a batch of comments elsewhere, so if it has, would someone mind directing me to it? My question is this: given the existence of the multiverse, shouldn't there be some universe out there in which an AI has already gone FOOM? If it has, wouldn't we see the effects of it in some way? Or have I completely misunderstood the physics?

And Eliezer, don't lie, everybody wants to rule the world.

Okay, you don't disapprove. Then consider the question one of curiosity. If Tyler Cowen acquired a Ring of Power and began gathering a circle of advisors, and you were in that circle, what specific advice would you give him?

Eliezer, I'd advise no sudden moves; think very carefully before doing anything. I don't know what I'd think after thinking carefully, as otherwise I wouldn't need to do it. Are you sure there isn't some way to delay thinking on your problem until after it appears? Having to have an answer now when it seems a likely problem is very expensive.

What about a kind of market system of states? The purpose of the states would be to provide a habitat matching each citizen's values and lifestyle.

-Each state will have its own constitution and rules.
-Each person can pick the state they wish to live in, assuming they are accepted under that state's rules.
-The amount of resources and territory allocated to each state is proportional to the number of citizens who choose to live there.
-There are certain universal meta-rules that supersede the states' rules, such as...
-A citizen may leave a state at any time and may not be held in a state against his or her will.
-No killing or significant non-consensual physical harm is permitted; at most, a state could permanently exile a citizen.
-There are some exceptions, such as limits on the decision-making power of children and the mentally ill.
-Etc.

Anyway, this is a rough idea of what I would do with unlimited power. I would build this, unless I came across a better idea. In my vision, citizens will tend to move into states they prefer and avoid states they dislike. Over time, good states will grow and bad states will shrink or collapse. However, states could also specialize; for example, you could have a small state with rules and a lifestyle just right for a small, dedicated population. I think this is an elegant way of not imposing a monolithic "this is how you should live" vision on every person in the world, yet the system will still kill off bad states and favor good states, whatever those attractors turn out to be. (A rough sketch of the allocation mechanics follows below.)

P.S. In this vision I assume the Earth is "controlled" (meta-rules only) by a singleton super-AI with nanotech, so we don't have to worry about things like crime (force fields), violence (more force fields), or basic necessities such as food.
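P.P.S. To make the allocation mechanics concrete, here is a minimal sketch in code (all names and numbers are hypothetical placeholders, not a design I am committed to):

```python
# Minimal sketch of the "market of states" idea (hypothetical names/numbers).
# Resources are split in proportion to population; a citizen may leave at any
# time; permanent exile is the strongest sanction a state has.

TOTAL_RESOURCES = 1_000_000  # arbitrary units of land/energy/compute

def allocate(populations):
    """Split total resources in proportion to each state's population."""
    total = sum(populations.values()) or 1
    return {name: TOTAL_RESOURCES * pop / total for name, pop in populations.items()}

class State:
    def __init__(self, name, admission_rule):
        self.name = name
        self.admission_rule = admission_rule  # callable: citizen -> bool
        self.members = set()

    def apply(self, citizen):
        # Each state controls admission under its own constitution...
        if self.admission_rule(citizen):
            self.members.add(citizen)
            return True
        return False

    def exile(self, citizen):
        # ...but permanent exile is the harshest sanction allowed (meta-rule).
        self.members.discard(citizen)

    def leave(self, citizen):
        # Meta-rule: a citizen may leave at any time.
        self.members.discard(citizen)

# Toy usage: an open state and a selective one.
anarchia = State("Anarchia", admission_rule=lambda c: True)
monastia = State("Monastia", admission_rule=lambda c: c.endswith("_ascetic"))
anarchia.apply("alice")
monastia.apply("bob_ascetic")
print(allocate({s.name: len(s.members) for s in (anarchia, monastia)}))
```

The only points the sketch encodes are the ones above: each state controls admission, exile is its strongest sanction, leaving is always allowed, and resources track population.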

I'm glad to hear that you aren't trying to take over the world. The less competitors I have, the better.

@lowly undergrad

Perhaps you're thinking of The Great Filter (http://hanson.gmu.edu/greatfilter.html)?

"Eliezer, I'd advise no sudden moves; think very carefully before doing anything."

But about 100 people die every minute!

PK: I like your system. One difficulty I notice is that you have thrust the states into the role of the omniscient player in the Newcomb problem. Since the states are unable to punish members beyond expelling them, they are open to 'hit and run' tactics. They are left with the need to predict accurately which members and potential members will break a rule, 'two-box', and be a net loss to the state with no possibility of punishment. They need to choose people who will one-box and stay for the long haul. Executions and life imprisonment are simpler, from a game-theoretic perspective.

James, it's ok. I have unlimited power and unlimited precision. I can turn back time. At least, I can rewind the state of the universe such that you can't tell the difference (http://www.overcomingbias.com/2008/05/timeless-physic.html).

Tangentially, does anyone know what I'm talking about if I lament how much of Eliezer's thought stream ran through my head, prompted by Sparhawk?

Eliezer: Let's say that someone walks up to you and grants you unlimited power.

Let's not exaggerate. A singleton AI wielding nanotech is not unlimited power; it is merely a Big Huge Stick with which to apply pressure to the universe. It may be the biggest stick around, but it's still operating under the very real limitations of physics - and every inch of potential control comes with a cost of additional invasiveness.

Probably the closest we could come to unlimited power would be pulling everything except the AI into a simulation, and allowing for arbitrary amounts of computation between each tick.

Billy Brown: If you give each individual whatever they want you’ve just destroyed every variety of collectivism or traditionalism on the planet, and those who valued those philosophies will curse you.

It's probably not the worst tradeoff, being cursed only by those who feel their values should take precedence over those of other people.

> But about 100 people die every minute!

If you have unlimited power, and aren't constrained by current physics, then you can bring them back. Of course, some of them won't want this.

Now, if you have (as I'm interpreting this article) unlimited power, but your current faculties, then embarking on a program to bring back the dead could (will?) backfire.

I think Sparhawk was a fool. But you need to remember that, internally, he was basically medieval. Also, externally, you need to remember that Eddings is only an English professor and fantasy writer.

> It's probably not the worst tradeoff, being cursed only by those who feel their values should take precedence over those of other people.

Why should your values take precedence over theirs? It sounds like you're asserting that tyranny > collectivism.

@Cameron: Fictional characters with unlimited power sure act like morons, don't they?

Singularitarians: The Munchkins of the real universe.

http://project-apollo.net/mos/mos190.html

:)

Sorry for being off-topic, but has that 3^^^^3 problem been solved already? I just read the posts and, frankly, I fail to see why this caused so many problems.

Among the things that Jaynes repeats a lot in his book is that the sum of all probabilities must be 1. Hence, if you put probability somewhere, you must remove it elsewhere. What is the prior probability for "me being able to simulate/kill 3^^^^3 persons/pigs"? Let's call that nonzero number "epsilon". Now, I guess that the (3^^^^3)-1 case should have a probability greater than or equal to epsilon, and the same for (3^^^^3)-2, etc. Even with a "cap" at 3^^^^3, this makes epsilon <= 1/(3^^^^3). And this doesn't consider the case "I fail to fulfill my threat and suddenly change into a sofa", let alone all the >=42^^^^^^^42 possible statements in that meta-multiverse. The integral should still be one.

Now, the fact that I make said statement should raise the posterior probability to something larger than epsilon, depending on your trust in me etc, but the order of magnitude is at least small enough to cancel out the "immenseness" of 3^^^^3. Is it that simple or am I missing something?

Pierre, it is not true that all probabilities sum to 1. Only for an exhaustive set of mutually exclusive events must the probability sum to 1.

Sorry, I have not been specific enough. Each of my 3^^^^3, 3^^^^3-1, 3^^^^3-2, etc. examples is mutually exclusive (but the sofa is part of the "0" case). While they might not span all possibilities (not exhaustive) and could thus sum to less than one, they cannot sum to more than 1. As I see it, the weakest assumption here is that "more persons/pigs is less or equally likely". If this holds, the "worst-case scenario" is epsilon = 1/(3^^^^3), but I would guess far less than that.
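In symbols, a minimal formalization of what I am assuming (my own notation): let p_N be the probability that N is the exact maximum number of persons I can simulate, so the p_N are mutually exclusive, and write N* = 3^^^^3. Then

$$p_0 \ge p_1 \ge \dots \ge p_{N^*} \quad\text{and}\quad \sum_{N=0}^{N^*} p_N \le 1 \;\Longrightarrow\; (N^*+1)\,p_{N^*} \le 1 \;\Longrightarrow\; p_{N^*} \le \frac{1}{N^*+1}.$$

That is the bound I had in mind.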

To ask what God should do to make people happy, I would begin by asking whether happiness or pleasure are coherent concepts in a future in which every person had a Godbot to fulfill their wishes. (This question has been addressed many times in science fiction, but with little imagination.) If the answer is no, then perhaps God should be "unkind", and prevent desire-saturation dynamics from arising. (But see the last paragraph of this comment for another possibility.)

What things give us the most pleasure today? I would say, sex, creative activity, social activity, learning, and games.

Elaborating on sexual pleasure probably leads to wireheading. I don't like wireheading, because it fails my most basic ethical principle, which is that resources should be used to increase local complexity. Valuing wireheading qualia also leads to the conclusion that one should tile the universe with wireheaders, which I find revolting, although I don't know how to justify that feeling.

Social activity is difficult to analyze, especially if interpersonal boundaries, and the level of the cognitive hierarchy to relate to as a "person", are unclear. I would begin by asking whether we would get any social pleasure from interacting with someone whose thoughts and decision processes were completely known to us.

Creative activity and learning may or may not have infinite possibilities. Can we continue constructing more and more complex concepts, to infinity? If so, then knowledge is probably also infinite, for as soon as we have constructed a new concept, we have something new to learn about. If not, then knowledge - not specific knowledge of what you had for lunch today, but general knowledge - may be limited. Creative activity may have infinite possibilities, even if knowledge is finite.

(The answer to whether intelligence has infinite potential has many other consequences; notably, Bayesian reasoners are likely only in a universe in which there are finite useful concepts, because otherwise it will be preferable to be a non-Bayesian reasoner working over more complex concepts using faster algorithms.)

Games largely rely on uncertainty, improving mastery, and competition. Most of what we get out of "life", besides relationships and direct hormonal pleasures like sex, food, and fighting, is a lot like what we get from playing a game. One fear is that life will become like playing chess when you already know the entire game tree.

If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge. A future of endless war may be preferable to a future in which someone has won. It may even be preferable to a future of endless peace. If you study the middle ages of Europe, you will probably at some point ask, "Why did these idiots spend so much time fighting, when they could have all become wealthier if they simply stopped fighting long enough for their economies to grow?" Well, those people didn't know that economies could grow. They didn't believe that there was any further progress to be made in any domain - art, science, government - until Jesus returned. They didn't have any personal challenges; the nobility often weren't even allowed to do work. If you read what the nobles wrote, some of them said clearly that they fought because they loved fighting. It was the greatest thrill they ever had. I don't like this option for our future, but I can't rule out the possibility that war might once again be preferable to peace, if there actually is no more progress to be made and nothing to be done.

The answers to these questions also have a bearing on whether it is possible for God, in the long run, to be selfish. It seems that God would be the first person to have his desires saturated, and enter into this difficult position where it is hard to imagine how to want anything. I can imagine a universe, rather like the Buddhist universe, in which various gods, like bubbles, successively float to the top, and then burst into nothingness, from not caring anymore. I can also imagine an equilibrium, in which there are many gods, because the greater the power one acquires, the less interest one has in preserving that power.

"But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it."

"If you invoke the unlimited power to create a quadrillion people, then why not a quadrillion?"

"Say, the programming team has cracked the "hard problem of conscious experience" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end in itself."

"What is individually a life worth living?"

Really, is not the ultimate answer to the whole FAI issue encoded there?

IMO, the most important thing about AI is to make sure IT IS SENTIENT. Then, with very high probability, it has to consider the very same questions suggested here.

(And to make sure it does, make more of them and make them diverse. The majority will likely "think right" and suppress the rest.)

Phil:

"If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge."

This is inconsistent. What conflict would really do is provide new information to process ("knowledge").

I guess I can agree with the rest of the post. What is worth pointing out, IMO, is that most pleasures, hormones and instincts excluded, are about processing 'interesting' information.

I guess, somewhere deep in all sentient beings, 'interesting information' is the ultimate joy. This has dire implications for any strong AGI.

I mean, the real pleasure for an AGI has to be about acquiring new information patterns. Wouldn't it be a little bit stupid to paperclip the solar system in that case?

Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?

Pierre, the proposition, "I am able to simulate 3^^^^3 people" is not mutually exclusive with the proposition "I am able to simulate 3^^^^3-1 people."

If you meant to use the propositions D_N: "N is the maximum number of people that I can simulate", then yes, all the D_N's would be mutually exclusive. Then if you assume that P(D_N) ≤ P(D_N-1) for all N, you can indeed derive that P(D_3^^^^3) ≤ 1/3^^^^3. But P("I am able to simulate 3^^^^3 people") = P(D_3^^^^3) + P(D_3^^^^3+1) + P(D_3^^^^3+2) + ..., which you don't have an upper bound for.
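To make the gap concrete with a worked example (my own numbers, chosen only to satisfy your monotonicity assumption): write N* = 3^^^^3 and let the D_N be uniform on {0, 1, ..., 2N*-1}. Then P(D_N) ≤ P(D_N-1) holds everywhere, yet

$$P(D_{N^*}) = \frac{1}{2N^*} \qquad\text{while}\qquad P(\text{I can simulate at least } N^* \text{ people}) = \sum_{M \ge N^*} P(D_M) = \frac{1}{2}.$$

The assumption caps each individual P(D_N), but it puts no useful cap on the tail sum that the mugger's claim actually refers to.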

An expected utility maximizer would know exactly what to do with unlimited power. Why do we have to think so hard about it? The obvious answer is that we are adaptation executers, not utility maximizers, and we don't have an adaptation for dealing with unlimited power. We could try to extrapolate a utility function from our adaptations, but given that those adaptations deal only with a limited set of circumstances, we'll end up with an infinite set of possible utility functions for each person. What to do?
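For contrast, the decision rule itself is trivial to write down; here is a minimal sketch (toy names and made-up numbers, purely illustrative). The entire difficulty is that we have no agreed-upon u or p to plug into it:

```python
# Minimal sketch of an expected utility maximizer's decision rule.
# The rule itself is trivial; everything hard is hidden in the utility
# function u and the outcome model p, which we do not actually have.

def expected_utility(action, outcomes, p, u):
    """Probability-weighted sum of utilities over possible outcomes."""
    return sum(p(outcome, action) * u(outcome) for outcome in outcomes)

def choose(actions, outcomes, p, u):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, p, u))

# Toy example with made-up numbers: two actions, two outcomes.
outcomes = ["utopia", "status quo"]
p = lambda o, a: {"act":  {"utopia": 0.10, "status quo": 0.90},
                  "wait": {"utopia": 0.01, "status quo": 0.99}}[a][o]
u = lambda o: {"utopia": 100.0, "status quo": 1.0}[o]
print(choose(["act", "wait"], outcomes, p, u))  # prints "act" under these numbers
```

With the numbers above it prints "act"; change u or p and it will happily print something else, which is exactly the problem.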

James D. Miller: But about 100 people die every minute!

Peter Norvig: Refusing to act is like refusing to allow time to pass.

What about acting to stop time? Preserve Earth at 0 kelvin. Gather all matter/energy/negentropy in the rest of the universe into secure storage. Then you have as much time as you want to think.

"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"

It is not about what YOU define as right.

Anyway, considering that Eliezer is an existing self-aware, sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, I suppose that some other powerful, strong, sentient, self-aware GI should reach the same point. I also *believe* that more general intelligence makes GIs converge to such "right thinking".

What makes me worry most is building GAI as a non-sentient utility maximizer. OTOH, I *believe* that 'non-sentient utility maximizer' is mutually exclusive with 'learning' strong AGI system - in other words, any system capable of learning and exceeding human intelligence must outgrow non-sentience and utility maximizing. I might be wrong, of course. But the fact that the universe is not paperclipped yet makes me hope...

Wei,

Is it any safer to think for ourselves about how to extend our adaptation-executer preferences than to program an AI to figure out what conclusions we would come to, if we did think a long time?

I'm thinking here of studies I half-remember about people preferring lottery tickets whose numbers they made up to randomly chosen lottery tickets, and about people thinking themselves safer if they have the steering wheel than if equally competent drivers have the steering wheel. (I only half-remember the studies; don't trust the details.) Do you think a bias like that is involved in your preference for doing the thinking ourselves, or is there reason to expect a better outcome?

Robin wrote: Having to have an answer now when it seems a likely problem is very expensive.

(I think you meant to write "unlikely" here instead of "likely".)

Robin, what is your probability that eventually humanity will evolve into a singleton (i.e., not necessarily through Eliezer's FOOM scenario)? It seems to me that competition is likely to be unstable, whereas a singleton by definition is stable. Competition can evolve into a singleton, but not vice versa. Given that negentropy increases as the square of mass, most competitors have to remain in the center, and the possibility of a singleton emerging there can't ever be completely closed off. BTW, a singleton might emerge from voluntary mergers, not just one competitor "winning" and "taking over".
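(A sketch of the scaling I have in mind, assuming the usual black-hole, i.e. Bekenstein-Hawking, entropy bound:

$$S_{BH} = \frac{k_B c^3 A}{4 \hbar G}, \qquad A = 16\pi\left(\frac{GM}{c^2}\right)^2 \;\Longrightarrow\; S_{BH} = \frac{4\pi k_B G}{\hbar c}\, M^2,$$

so the entropy a region can absorb, and hence the negentropy it can hold, grows as M^2 rather than linearly in the mass gathered there.)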

Another reason to try to answer now, instead of later, is that coming up with a good answer would persuade more people to work towards a singleton, so it's not just a matter of planning for a contingency.

Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.

I quote:

"The young revolutionary's belief is honest. There will be no betraying catch in his throat, as he explains why the tribe is doomed at the hands of the old and corrupt, unless he is given power to set things right. Not even subconsciously does he think, "And then, once I obtain power, I will strangely begin to resemble that old corrupt guard, abusing my power to increase my inclusive genetic fitness."

"no sudden moves; think very carefully before doing anything" - doesn't that basically amount to an admission that human minds aren't up to this, that you ought to hurriedly self-improve just to avoid tripping over your own omnipotent feet?

This presents an answer to Eliezer's "how much self improvement?": there has to be some point at which the question "what to do" becomes fully determined and further improvement is just re-proving the already proven. So you improve towards that point and stop.

This is a general point concerning Robin's and Eliezer's disagreement. I'm posting it in this thread because this thread is the best combination of relevance and recentness.

It looks like Robin doesn't want to engage with simple logical arguments if they fall outside of established, scientific frameworks of abstractions. Those arguments could even be damning critiques of (hidden assumptions in) those abstractions. If Eliezer were right, how could Robin come to know that?

I second Robin's implied suggestion: don't be so quick to discard the option of building an AI that can improve itself in certain ways, but not to the point of needing to hardcode something like Coherent Extrapolated Volition. Is it really impossible to make an AI that can become "smarter" in useful ways (including by modifying its own source code, if you like), without it ever needing to take decisions itself that have severe nonlocal effects? If intelligence is an optimization process, perhaps we can choose more carefully what is being optimized until we are intelligent enough to go further.

I suppose one answer is that other people are on the verge of building AIs with unlimited powers so there is no time to be thinking about limiting goals and powers and initiative. I don't believe it, but if true we really are hosed.

It seems to me that if reasoning leads us to conclude that building self-improving AIs is a million-to-one shot to not destroy the world, we could consider not doing it. Find another way.

Oops, Julian Morrison already said something similar.

Just a note of thanks to Phil Goetz for actually considering the question.

What if creating a friendly AI isn't about creating a friendly AI?

I may prefer Eliezer to grab the One Ring over others who are also trying to grab it, but that does not mean I wouldn't rather see the ring destroyed, or divided up into smaller bits for more even distribution.

I haven't met Eliezer. I'm sure he's a pretty nice guy. But do I trust him to create something that may take over the world? No, definitely not. I find it extremely unlikely that selflessness is the causal factor behind his wanting to create a friendly AI, despite how much he may claim so or how much he may believe so. Genes and memes do not reproduce via selflessness.

I have been following your blog for a while, and I find it extremely entertaining and also informative.

However, some criticism:

1. You obviously suffer from what Nassim Taleb calls “ludic fallacy”. That is, applying “perfect” mathematical and logical reasoning to a highly “imperfect” world. A more direct definition would be “linear wishful thinking” in an extremely complex, non-linear environment.

2. It is admirable that one can afford to indulge in such conversations, as you do. However, bear in mind that the very notion of self you imply in your post is very, very questionable (Talking about the presentation of self in everyday life, Erving Goffman once said: “when the individual is in the immediate presence of others, his activity will have a promissory character.” Do you, somehow, recognize yourself? ;) ).

3. Being humble is so difficult when one is young and extremely intelligent. However, bear in mind that in the long run, what matters is not who will rule the world, or even whether one will get the Nobel Prize. What matters is the human condition. Bearing this in mind will not hamper your scientific efforts, but will provide you with much more ambiguity – the right fertilizer for wisdom.

Peter de Blanc: You are right and I came to the same conclusion while walking this morning. I was trying to simplify the problem in order to easily obtain numbers <=1/(3^^^^3), which would solve the "paradox". We now agree that I oversimplified it.

Instead of messing with a proof-like approach again, I will try to clarify my intuition. When you start considering events of that magnitude, you must consider _a lot_ of events (including waking up with blue tentacles for hands, to take Eliezer's example). The total probability is limited to 1 for exclusive events. Without evidence, there is no reason to put more probability there than anywhere else. There is not much evidence for a device exterior to our universe that can "read" our choice (giving five dollars or not) and then carry out the stated claim. I don't think that's even falsifiable "from our universe".

If the claim is not falsifiable, the AI should not accept unless I do something "impossible" within its current framework of thinking. A proof request I am thinking of is to do some calculations on the order-3^^^^3 computer and share easily verifiable results that would otherwise take longer than the age of the universe to obtain. The AI could also ask "simulate me and find a proof that would suit me". Once the AI is convinced, it could also throw in another five dollars and ask for some algorithmic improvements that would otherwise require a billion years to achieve. Or for ssh access to the 3^^^^3 computer.
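As a toy illustration of what I mean by "cheap to verify, expensive to produce" (a hypothetical sketch, nothing the AI or I am actually bound to): ask the claimant for a hash preimage at a difficulty no earthly computation could plausibly reach, and check the answer with a single hash.

```python
# Toy "proof of superpowers" challenge: easy to verify, believed hard to produce.
# The claimant must find data whose salted SHA-256 digest has many leading zero
# bits; checking costs one hash, while finding it honestly costs ~2**k hashes.

import hashlib
import os

DIFFICULTY_BITS = 128  # far beyond any plausible earthly computation

def make_challenge():
    """A random salt, handed to the claimant."""
    return os.urandom(32)

def verify(salt, answer):
    """Accept iff sha256(salt || answer) starts with DIFFICULTY_BITS zero bits."""
    digest = hashlib.sha256(salt + answer).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

# Usage: give `salt` to the claimant; a passing `answer` is strong evidence of
# either absurd computing resources or a break of SHA-256.
salt = make_challenge()
```

Verification is one SHA-256 call; producing a valid answer honestly is believed to take on the order of 2^128 calls.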

Wei, yes I meant "unlikely."
Bo, you and I have very different ideas of what "logical" means.
V.G., I hope you will comment more.

Grant:
We did not evolve to handle this situation. It's just as valid to say that we have an opportunity to exploit Eliezer's youthful evolved altruism, get him or others like him to make an FAI, and thereby lock himself out of most of the potential payoff. Idealists get corrupted, but they also die for their ideals.

I have been granted almighty power, constrained only by the most fundamental laws of reality (which may, or may not, correspond with what we currently think about such things).

What do I do? Whatever it is that you want me to do. (No sweat off my almighty brow.)

You want me to kill thy neighbour? Look, he's dead. The neighbour doesn't even notice he's been killed ... I've got almighty power, and have granted his wish too, which is to live forever. He asked the same about you, but you didn't notice either.

In a universe where I have "almighty" power, I've already banished all contradictions, filled all wishes, made everyone happy or sad (according to their desires), and am now sipping Laphroaig, thinking, "Gosh, that was easy.".

Anna Salamon wrote: Is it any safer to think for ourselves about how to extend our adaptation-executer preferences than to program an AI to figure out what conclusions we would come to, if we did think a long time?

First, I don't know that "think about how to extend our adaptation-executer preferences" is the right thing to do. It's not clear why we should extend our adaptation-executer preferences, especially given the difficulties involved. I'd backtrack to "think about what we should want".

Putting that aside, the reason that I prefer we do it ourselves is that we don't know how to get an AI to do something like this, except through opaque methods that can't be understood or debugged. I imagine the programmer telling the AI "Stop, I think that's a bug." and the AI responding with "How would you know?"

g wrote: Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.

In that case the singleton might invent a game called "Competition", with rules decided by itself. Anti-prediction says that it's pretty unlikely those rules would happen to coincide with the rules of base-level reality, so base-level reality would still be controlled by the singleton.

If living systems can unite, they can also be divided. I don't see what the problem with that idea could be.
