
December 26, 2008

Comments

I'm having trouble distinguishing problems you think the friendly AI will have to answer from problems you think you will have to answer to build a friendly AI. Surely you don't want to have to figure out answers for every hard moral question just to build it, or why bother to build it? So why is this problem a problem you will have to figure out, vs. a problem it would figure out?

Because for the AI to figure out this problem without creating new people within itself, it has to understand consciousness without ever simulating anything conscious.

The "problem" seems based on several assumptions:

1. that there is an objectively best state of the world, to which a Friendly AI should steer the universe
2. that pulling the plug on a Virtual Universe containing persons is wrong
3. that there is something special about "persons," and we should try to keep them in the universe and/or make more of them

I'm not sure any of these are true. Regarding 3, even if there is an X that is special, and that we should keep in the universe, I'm not sure "persons" is it. Maybe it is simpler: "pleasure-feeling-stuff" or "happiness-feeling-stuff." Even if there is a best state of the universe, I'm not sure there are any persons in it at all. Or perhaps only one.

In other words, our ethical views (to the extent that godlike minds can sustain any) might find that "persons" are coincidental containers for ethically-relevant-stuff, and not the ethically-relevant-stuff itself.

The notion that we should try to maximize the number of people in the world, perhaps in order to maximize the amount of happiness in the world, has always struck me as taking the Darwinian carrot-on-the-stick one step too far.

Would a human, trying to solve the same problem, also run the risk of simulating a person?

See also:
http://xkcd.com/390/

Note that there's a similar problem in the free will debate:

Incompatibilist: "Well, if a godlike being can fix the entire life story of the universe, including your own life story, just by setting the rules of physics and the initial conditions, then you can't have free will."

Compatibilist: "But in order to do that, the godlike being would have to model the people in the universe so well that the models are people themselves. So there will still be un-modeled people living in a spontaneous way that wasn't designed by the godlike being. (And if you say that the godlike being models the models too, the same problem arises in another iteration; you can't win that race, incompatibilist; it's turtles all the way down.)"

Incompatibilist: "I'm not sure that's true. Maybe you can have models of human behavior that don't themselves result in people. But even if that's true, people don't create themselves from scratch. Their entire life stories are fixed by their environment and heredity, so to speak. You may have eliminated the rhetorical device used to make my point, but the point itself remains true."

At which point, the two parties should decide what "free will" even means.

"With a good toolbox of nonperson predicates in hand, we could exclude all "model citizens" - all beliefs that are themselves people - from the set of hypotheses our Bayesian AI may invent to try to model its person-containing environment."
After you excise a part of its hypothesis space, is your AI still Bayesian?

A bounded rationalist only gets to consider an infinitesimal fraction of the hypothesis space anyway.

More precisely, the AI will be banned from actually running simulations based on the "forbidden hypotheses," though it could perhaps still consider abstract mathematical properties that don't simulate anything in detail.

Of course, those considerations themselves would have to be fed through the predicate. But it isn't so much a "banned hypothesis" as "banned methods of considering the hypothesis," or possibly "banned methods of searching the hypothesis space."
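To make that distinction concrete, here is a minimal sketch in Python. All of the names (Hypothesis, NonpersonPredicate, may_simulate) are hypothetical illustrations, not anyone's actual design: hypotheses stay in the space, but detailed simulation is gated behind a conservative whitelist of nonperson predicates, and anything not certified safe is only reasoned about abstractly.

```python
# Illustrative sketch only; all names are hypothetical.
from typing import Callable, List

class Hypothesis:
    """Stand-in for a candidate model of the environment."""

    def abstract_properties(self) -> dict:
        # Coarse reasoning that never runs the model in detail.
        return {"complexity": None}

    def run_detailed_simulation(self):
        # The worrying operation: a detailed run might instantiate a person.
        raise NotImplementedError

NonpersonPredicate = Callable[[Hypothesis], bool]

def may_simulate(h: Hypothesis, predicates: List[NonpersonPredicate]) -> bool:
    # Safe only if at least one predicate certifies "definitely not a person".
    # A False answer means merely "unsure", so the default is to refuse.
    return any(p(h) for p in predicates)

def consider(h: Hypothesis, predicates: List[NonpersonPredicate]):
    if may_simulate(h, predicates):
        return h.run_detailed_simulation()
    # The hypothesis is still considered -- just never run in detail.
    return h.abstract_properties()
```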

Michael, you should be asking if the AI will be making good predictions, not if it's Bayesian. You can be Bayesian even if you have only two hypotheses. (With only one hypothesis, it's debatable.)
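As a tiny worked example of that point (the numbers are made up), a Bayesian update is perfectly well defined over a hypothesis space containing only two hypotheses; coherence doesn't require the space to be exhaustive, only that beliefs over it are updated consistently.

```python
# Two-hypothesis Bayesian update with made-up numbers.
prior = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.8, "H2": 0.2}   # P(observed data | hypothesis)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}
print(posterior)  # {'H1': 0.8, 'H2': 0.2}
```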

Psy-Kosh: You know, you're right. And it's an important distinction, so thank you.

Eliezer: supposing we label a model as definitely-a-person, do you want to just toss it out of the hypothesis space as if it never existed, or do you want to try to reason abstractly about what that model would do without actually running the model?

Oh, Psy-Kosh already said what I just said.

Let me see if I've got this right. So we've got these points in some multi-dimensional space, perhaps dimensions like complexity, physicality, intelligence, similarity to existing humans, etc. And you're asking for a boundary function that defines some of these points as "persons," and some as "not persons." Where's the hard part? I can come up with any function I want. What is it that it's supposed to match that makes finding the right one so difficult?

Eliezer: You're welcome. :)

Arthur: no, the point isn't to simply have an arbitrary definition of a person. The point is to be able to have some way of saying "this specific chunk of the space of computations provably corresponds to non-conscious entities, thus is 'safe'; that is, we can run such computations without having to worry about unintentionally creating and doing bad things to actual beings."

i.e., "non person" in the sense of "non conscious"

You might say, tongue in cheek, that we're trying to figure out how to _deliberately_ create a philosophical zombie. (Okay, not technically a p-zombie, but basically: figure out how to model people as accurately as possible without the models themselves being people, that is, conscious in and of themselves.)

Why must destroying a conscious model be considered cruel if it wouldn't have even been created otherwise, and it died painlessly? I mean, I understand the visceral revulsion to this idea, but that sort of utilitarian ethos is the only one that makes sense to me rationally.

Furthermore, from our current knowledge of the universe I don't think we can possibly know if a computational model is even capable of producing consciousness so it is really only a guess. The whole idea seems near-metaphysical, much like the multiverse hypothesis. Granted, the nonzero probability of these models being conscious is still significant considering the massive future utility, but considering the enormity of our ignorance you might as well start talking about the non-zero probability of rocks being conscious.

I don't think anyone answered Doug's question yet. "Would a human, trying to solve the same problem, also run the risk of simulating a person?"

I have heard of carbon chauvinism, but perhaps there is a bit of binary chauvinism going on?

I end up with the slightly disturbing thought that killing people by taking them out in an instant, without anyone ever knowing they were there, does not necessarily seem to be inherently evil.

We always 'kill' part of ourselves by making decisions and not developing in a different way than we do.

What if we were to simulate a bunch of decisions for some recognizable amount of time and then wipe out every copy except the one we prefer in the end?

Maybe all the people in the stories you make up are simulated entities too. And if you don't write the story down, or tell anyone in enough detail, they die with you.

Confused,

Martin

Psy-Kosh, I realize the goal is to have a definition that's non-arbitrary. So it has to correlate with something else. And I don't see what we're trying to match it with, other than our own subjective sense of "a thing that it would be unethical to unintentionally create and destroy." Isn't this the same problem as the abortion debate? When does life begin? Well, what exactly is life in the first place? How do we separate persons from non-persons? Well, what's a person?

I think the problem to be solved lies not in this question, but in how the ethics of the asker are defined in the first place. And I don't just mean Eliezer, because this is clearly a larger-scale question. "How well will different possible boundary functions match the ethical standards of modern American society?" might be a good place to start.

Yes, thanks Psy. That makes much more sense.

Anonymous Coward: Furthermore, from our current knowledge of the universe I don't think we can possibly know if a computational model is even capable of producing consciousness so it is really only a guess.

Are you sure? No One Knows What Science Doesn't Know ... and in this case I see no reason why a computational model can't produce consciousness. If you simulate a human brain to a sufficient level of detail, it will basically be human, and think exactly the same things as the "original" brain.

"Why must destroying a conscious model be considered cruel if it wouldn't have even been created otherwise, and it died painlessly? I mean, I understand the visceral revulsion to this idea, but that sort of utilitarian ethos is the only one that makes sense to me rationally." -Anonymous Coward

Should your parents have the right to kill you now, if they do so painlessly? After all, if it wasn't for them, you wouldn't have been brought into existence anyway, so you would still come out ahead.

"Should your parents have the right to kill you now, if they do so painlessly?"

Yes, according to that logic. Also, from a negative utilitarian standpoint, it was actually the act of creating me which they had no right to do since that makes them responsible for all pain I have ever suffered.

I'm not saying I live life by utilitarian ethics, I'm just saying I haven't found any way to refute it.

That said though, non-existence doesn't frighten me. I'm not so sure non-existence is an option though, if the universe is eternal or infinite. That might be a very good thing or a very bad thing.

Don't you need a person predicate as well? If the RPOP is going to upload us all or something similar, doesn't ve need to be sure that the uploads will still be people?

@Will: we only need to figure out the nonperson predicate; the FAI will figure out the person predicate afterwards (if uploading the way we currently understand it is what we will want to do).

"by the time the AI is smart enough to do that, it will be smart enough not to"

I still don't quite grasp why this isn't an adequate answer. If an FAI shares our CEV, it won't want to simulate zillions of conscious people in order to put them through great torture, and it will figure out how to avoid it. Is it simply that it may take the simulated torture of zillions for the FAI to figure this out? I don't see any reason to think that we will find this problem very much easier to solve than a massively powerful AI.

I'm also not wholly convinced that the only ethical way to treat simulacra is never to create them, but I need to think about that one further.

If you would balk at killing a million people with a nuclear weapon, you should balk at this.

The main problem with death is that valuable things get lost.

Once people are digital, this problem tends to go away - since you can relatively easily scan their brains - and preserve anything of genuine value.

In summary, I don't see why this issue would be much of a problem.

Jayson Virissimo:

To put my own spin on a famous quote, there are no "rights". There is do, or do not.

I guess another way of thinking about it is that you decide on what terminal (possibly dynamic) state you want, then take measures to achieve that. Floating "rights" have no place.

(To clarify, "rights" can serve as a useful heuristic in practical discussions, but they're not fundamental enough to figure into this kind of deep philosophical issue.)

I was pondering why you didn't choose to use a collection of person predicates, any of which might identify a model as unfit for simulation. It occurred to me that this is very much like a whitelist of things that are safe vs. a blacklist of everything that is not (which may have to be infinite to be effective); a rough sketch of the contrast appears below.

On re-reading I see why it would be difficult to make an is-a-person test at all, given current knowledge.

This does leave open what to do with a model that doesn't hit any of the nonperson predicates. If an AI finds itself with a model Eliezer that _might_ be a person, what then? How do you avoid that happening?
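Here is a toy sketch of that whitelist/blacklist contrast; the function names are hypothetical. A blacklist of person predicates treats "no match" as safe, which fails dangerously on anything it doesn't recognize, while a whitelist of nonperson predicates treats "no match" as unsafe, which merely wastes some harmless models we couldn't prove harmless.

```python
# Toy contrast between the two testing directions; names are hypothetical.

def blacklist_allows(model, person_predicates) -> bool:
    # Dangerous default: anything not flagged as a person slips through.
    return not any(p(model) for p in person_predicates)

def whitelist_allows(model, nonperson_predicates) -> bool:
    # Conservative default: only models certified "definitely not a person" run.
    return any(p(model) for p in nonperson_predicates)

# A model that hits no predicate at all is allowed by the blacklist but
# refused by the whitelist -- which is exactly the open question above:
# what should the AI then do with such a model?
```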

How complex a game of Life could it play before the game-of-life nonperson predicate should return 1?
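As one crude illustration of the question (the threshold below is hypothetical and obviously arbitrary), such a predicate might certify a Life pattern only while it stays small, with True standing for "certified not a person" and False for "can't tell"; where exactly the cutoff should sit is the worry raised in the next comment.

```python
# Crude illustration; the cutoff below is hypothetical and plainly inadequate.
from collections import Counter

def life_step(live_cells: set) -> set:
    """One generation of Conway's Game of Life on a sparse set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

def toy_life_nonperson_predicate(live_cells: set, max_cells: int = 10_000) -> bool:
    # True = "certified not a person" under this toy rule; False = "can't tell".
    return len(live_cells) < max_cells

# A glider passes trivially; the Sorites-style question is where the line goes.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(toy_life_nonperson_predicate(life_step(glider)))  # True
```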

This sounds like a Sorites paradox. It's also a subset of a larger problem. We, regular modern humans, don't have any scalar concepts of personhood. We assume it's a binary, from long experience with a world in which only one species talks back, and they're all almost exactly at our level. In the existing cases where personhood is already undeniably scalar (children), we fudge it into a binary by defining an age of majority - an obvious dirty hack with plenty of cultural fallout.

A lot of ethics problems get blurry when you start trying to map them across sub- through super-persons.

I think the word "kill" is being grossly misused here. It's one thing to say you have no right to kill a person, something very different to say that you have a responsibility to keep a person alive.

It's not so much the killing that's an issue as the potential mistreatment. If you want to discover whether people like being burned, "Simulate EY, but on fire, and see how he responds" is just as bad of an option as "Duplicate EY, ignite him, and see how he responds". This is a tool that should be used sparingly at best and that a successful AI shouldn't need.

Uhm, maybe it is naive, but if you have a problem that your mind is too weak to decide, and you have a really strong (friendly) superintelligent GAI, wouldn't it be logical to use the GAI's strong mental processes to resolve the problem?

I propose this conjecture: In any sufficiently complex physical system there exists a subsystem that can be interpreted as the mental process of a sentient being experiencing unbearable suffering.

In this case, Eliezer's goal is like avoiding crushing the ants while walking on the top of an anthill.

It is a developmental problem of how to prevent the AI from making this specific mistake that seems to be in the way. This ethical injunction is about what kind of thoughts need to be avoided, not just about surprisingly bad consequences of actions on the external environment. If the AI were developed to disproportionately focus on understanding the environment more than on understanding its own mind, this would be the kind of disaster to expect. At the same time, the AI needs to understand the environment sufficiently to understand the injunction before becoming able to apply the injunction to its own mind. This calls for a careful balance, maybe for developing content-specific mechanisms by programmers.

People are uniquely situated to think about this problem, since we are unable to make the mistake due to our limited capability, and we are not a part of such a mistake. Any construction of limited cognitive capability that the AI could make to solve this problem without making the mistake runs a risk of itself being an embodiment of the mistake. If the nonperson predicate is a true part of the AI, both a form of thought and an object, the AI has a way to proceed.

Daniel,

Every decision rule we could use will result in some amount of suffering and death in some Everett branches, possible worlds, etc., so we have to use numbers and proportions. There are more and simpler interpretations of a human brain as a mind than there are such interpretations of a rock. If we're not mostly Boltzmann-brain interpretations of rocks, that seems like an avenue worth pursuing.

In my mind this comes down to a fundamental question in the philosophy of math. Do we create theorems or discover them?

If it turns out to be 'discovery' then there is no foul in ending a mind emulation, because each consecutive state can be seen as a theorem in some formal system, and thus all states (the entire future timeline of the mind) already exist, even if undiscovered.

Personally I fail to see how encoding something in physical matter makes the pattern any more real. You can kill every mathematician and burn every textbook, but I would still say that the theorems then inaccessible to humanity still exist. I'm not so convinced of this that I would pull the plug on an emulation, though.

I'd like to second what Julian Morrison wrote. Take a human and start disassembling it atom by atom. Do you really expect to construct some meaningful binary predicate that flips from 1 to 0 somewhere along the route?

EY: What if an AI creates millions, billions, trillions of alternative hypotheses, models that are actually people, who die when they are disproven?
If your AI is fully deterministic, then any of its states can be recreated exactly. Just set the log level of the baby AI's inputs to 'everything' and hope your supply of write-once-read-many media doesn't run out before it gets smart enough to discard, in a provably Friendly way, the data that isn't people. Doesn't solve the problem of suffering, though.
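A minimal sketch of that replay idea, with hypothetical interfaces (DeterministicAI, run_and_log, replay are illustrative only): if the AI's state is a deterministic function of its seed and input history, logging every input to append-only storage is enough to reconstruct any past state exactly.

```python
# Illustrative sketch only; a real system would log to write-once media.
import hashlib
import json

def _det_hash(*parts: str) -> str:
    # Deterministic across runs (unlike Python's built-in hash of strings).
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

class DeterministicAI:
    """Toy agent whose state depends only on its seed and input history."""

    def __init__(self, seed: str):
        self.state = _det_hash("init", seed)

    def step(self, observation: str) -> None:
        # Fully deterministic transition: same inputs always give the same state.
        self.state = _det_hash(self.state, observation)

def run_and_log(seed: str, observations: list, log_path: str) -> str:
    with open(log_path, "w") as log:
        log.write(json.dumps({"seed": seed}) + "\n")
        ai = DeterministicAI(seed)
        for obs in observations:
            log.write(json.dumps({"obs": obs}) + "\n")
            ai.step(obs)
    return ai.state

def replay(log_path: str) -> str:
    with open(log_path) as log:
        ai = DeterministicAI(json.loads(next(log))["seed"])
        for line in log:
            ai.step(json.loads(line)["obs"])
    return ai.state

# Replaying the log reconstructs the final state bit-for-bit.
assert run_and_log("s0", ["a", "b", "c"], "inputs.log") == replay("inputs.log")
```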

Suppose an AI creates a sandbox and runs a simulated human with a life worth living inside for 50 subjective years (interactions with other people are recorded at their natural borders and we don't consider merging minds). Then AI destroys the sandbox, recreates it and bit-perfectly reruns the simulation. With the exception of meaningless waste of computing resources, does your morality say this is better/equivalent/makes no difference/worse than restoring a copy from backup?

"I propose this conjecture: In any sufficiently complex physical system there exists a subsystem that can be interpreted as the mental process of an sentient being experiencing unbearable sufferings."

It turns out - I've done the math - that if you are using a logic-based AI, then the probability of having alternate possible interpretations diminishes as the complexity increases.

If you allow /subsystems/ to mean a subset of the logical propositions, then there could be such interpretations. But I think it isn't legit to worry about interpretations of subsets.

BTW, Eliezer, regarding this recent statement of yours: "Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction": I challenge you to find one post where you have tried to correct me in a misunderstanding of you, or even to identify the misunderstanding, rather than just complaining about it in a non-specific way.

@Goetz: Quick googling turned up this SL4 post. (I don't particularly give people a chance to start over when they switch forums.)

@Tim_Tyler:

The main problem with death is that valuable things get lost.

Once people are digital, this problem tends to go away - since you can relatively easily scan their brains - and preserve anything of genuine value.

In summary, I don't see why this issue would be much of a problem.

I was going to say something similar, myself. All you have to do is constrain the FAI so that it's free to create any person-level models it wants, as long as it also reserves enough computational resources to preserve a copy so that the model citizen can later be re-instantiated in their virtual world, without any subjective feeling of discontinuity.

However, that still doesn't obviate the question. Since the FAI has limited resources, it will still have to know which models it must reserve space to preserve, in order to know whether the greater utility of a model justifies the additional resources it requires. Then again, it could just accelerate the model so that that person lives out a full, normal life in their simulated universe, so that they are irreversibly dead in their own world anyway.
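A toy sketch of the bookkeeping that would imply, with made-up names and numbers: before running a person-level model, the planner reserves storage for a restorable snapshot, and the model is only worth running if its expected value covers that extra cost.

```python
# Toy resource accounting; all names and numbers here are made up.
from dataclasses import dataclass

@dataclass
class ModelPlan:
    name: str
    expected_value: float   # utility of the predictions the model would yield
    runtime_cost: float     # compute spent running the model
    snapshot_cost: float    # storage reserved so it can be re-instantiated later

def worth_running(plan: ModelPlan, resource_budget: float) -> bool:
    total_cost = plan.runtime_cost + plan.snapshot_cost
    # The snapshot reservation is treated as a hard requirement, not optional.
    return total_cost <= resource_budget and plan.expected_value > total_cost

# A detailed model only pays its way if its predictions are worth both the
# compute to run it and the storage to keep it restorable.
print(worth_running(ModelPlan("detailed_citizen", 5.0, 1.0, 3.0), 10.0))  # True
```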

Silas, what do you mean by a subjective feeling of discontinuity, and why is it an ethical requirement? I have a subjective feeling of discontinuity when I wake up each morning, but I don't think that means anything terrible has happened to me.

@Daniel_Franke: I was just describing a sufficient, not a necessary condition. I'm sure you can ethically get away with less. My point was just that, once you can make models that detailed, you needn't be prevented from using them altogether, because you wouldn't necessarily have to kill them (i.e. give them information-theoretic death) at any point.

I recall that in one of the Discworld novels the smallest unit of time is defined as the period in which the universe is destroyed and then recreated. What if that were continually happening (perhaps even in a massively parallel manner)? What difference would that make? Building on some of Eliezer's earlier writing on zombies and quantum clones, I say none at all. Just as the simulated person in a human's dream is irrelevant once forgotten. It's possible that I myself am a simulation, and in that case I don't want my torture to be simulated (at least in this instance; I have no problem constructing another simulation/clone of me that gets tortured), but I can't retroactively go back and prevent my simulator from creating me in order to torture me.

I okayed mothers committing full-blown infanticide here.

ShardPhoenix, you may be interested in this book [shameless plug]

Is the simulation really a person, or is it an aspect of the whole AI/person? To the extent I feel competent to evaluate the question at all (which isn't a huge extent, especially absent the ability to observe or know any actual established facts about real AIs that can create such complex simulations, since none are currently known to exist), I lean towards the latter opinion. The AI is a person, and it can create simulations that are complex enough to seem like persons.

Nice discussion. You want ways to keep from murdering people created solely for the purpose of predicting people?

Well, if you can define 'consciousness' with enough precision, you'd be making headway on your AI. I can imagine silicon won't have the safeguard a human has, having to use its own conscience to model someone else. But you could have any consciousness it creates added to its own, not destroyed... although creating that sort of awareness mutation may lead to the sort of AI that rebels against its programming in action movies.

