
December 28, 2008

Comments

This is all predicated on the assumption that "sentience" automatically results in moral rights. I would say that moral rights are fundamentally based on empathy, which is subjective -- we give other people moral rights in order to secure those rights for ourselves.

I think the vast majority of the population would have no problem with "apartheid" or "genocide" of sentient AIs or chimps. As a secular humanist, I would reluctantly agree with them. Like it or not, at some level my morality boils down to an emotional attachment to humanity, and transferring that attachment to non-humans would be a big leap.

There are obvious parallels to the evolution of racial attitudes, and maybe someday "humanist" will join "racist" as a pejorative. If that happens, so be it, but I think that change is a long way away.

"Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?"

Given that the vast majority of possible futures are significantly worse than this, I would be pretty happy with this outcome. But what happens when we've filled the universe? Much like in the board game Risk, your attitude towards your so-called allies will abruptly change once the two of you are the only ones left.

Tim:

Eliezer was using "sentient" practically as a synonym for "morally significant". Everything he said about the hazards of creating sentient beings was about that. It's true that in our current state, our feelings of morality come from empathic instincts, which may not stretch (without introspection) so far as to feel concern for a program which implements the algorithms of consciousness and cognition, even perhaps if it's a human brain simulation. However, upon further consideration and reflection, we (or at least most of us, I think) find that a human brain simulation is morally significant, even though there is much that is not clear about the consequences. The same should be true of a consciousness that isn't in fact a simulation of a human, but of course determining what is and what is not conscious is the hard part.

It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves.

Some people take "satisficing, instead of maximizing" a little too far.

Shouldn't this outcome be something the CEV would avoid anyway? If it's making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist.

Also, I think it is at least as possible that on moral reflection we would consider all mammals/animals/life as equal citizens. So we may already be outvoted.

I think we're all out of our depth here. For example, do we have an agreed upon, precise definition of the word "sentient"? I don't think so.

I think that for now it is probably better to try to develop a rigorous understanding of concepts like consciousness, sentience, personhood and the reflective equilibrium of humanity than to speculate on how we should add further constraints to our task.

Nonsentience might be one of those intuitive concepts that falls to pieces upon closer examination. Finding "nonperson predicates" might be like looking for "nonfairy predicates".

I think it's worth noting that truly unlimited power means being able to undo anything. But is it wrong to rewind when things go south? If you rewind far enough you'll be erasing lives and conjuring up new, different ones. Is rewinding back to before an AI explodes into a zillion copies morally equivalent to destroying them in this direction of time? Unlimited power is unlimited ability to direct the future. Are the lives on every path you don't choose "on your shoulders", so to speak?

So if we created a brain emulation that wakes up one morning (in a simulated environment), lives happily for a day, and then goes to bed after which the emulation is shut down, would that be a morally bad thing to do? Is it wrong? After all, living one day of happiness surely beats non-existence?

"these trillions of people also cared, very strongly, about making giant cheesecakes."

Uh oh. IMO, that is a fallacy. You introduce a quite reasonable scenario, then inject some nonsense, without any logic or explanation, to make it look bad.

You should explain better when, on the way from a single sentient AI to voting rights for trillions, cheesecakes came into play. Is it that all sentient beings are automatically programmed to like creating big cheesecakes? Or anything equally bizarre?

Subtract the cheesecakes and your scenario is quite OK with me, including 0.1% of the galaxy for humans and 99.9% for AIs. 0.1% of the galaxy is about 200 million stars (assuming a couple hundred billion stars in the Milky Way)...

BTW, it is most likely that without sentient AI, there will be no human (or human-originated) presence outside the solar system anyway.

Well, so far, my understanding is that your suggestion is to create a nonsentient utility maximizer programmed to stop research in certain areas (especially research into creating sentient AI, right?). Thanks, I believe I have a better idea.

luzr: The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not. Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.

The problems of morality seem to be quite tough, particularly when tradeoffs are involved. But I think in your scenario, Lightwave, I agree with you.

nazgulnarsil: I disagree about the "unlimited power", at least as far as practical consequences are concerned. We're not *really* talking about unlimited power here, only humanly unattainable incredible power, at most. So rewinding isn't necessarily an option. (Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them.) Lives that are never lived should count morally similarly to how opportunity cost counts in economics. This means that probably, with sufficient optimization power, incredibly much better and worse outcomes are possible than any of the ones we ordinarily consider in our day-to-day actions, but the utilitarian calculation still works out.

roko: It's true that the discussion must be limited by our current ignorance. But since we have a notion of morality/goodness that describes (although imperfectly) what we want, and so far it has not proved to be necessarily incoherent, we should consider what to do based on our current understanding of it. It's true that there are many ways in which our moral/empathic instincts seem irrational or badly calibrated, but so far (as far as I know) each such inconsistency could be understood to be a difference between our CEV and our native mental equipment, and so we should still operate under the assumption that there is a notion of morality that is perfectly correct in the sense that it's invariant under further introspection. This is then the morality we should strive to live by. Now as far as I can tell, most (if not all) of morality is about the well-being of humans, and things (like brain emulations, or possibly some animals, or ...) that are like us in certain ways. Thus it makes sense to talk about morally significant or insignificant things, unless you have some reason why this abstraction seems unsuitable. The notion of "morally significant" seems to coincide with sentience.

But what if there is no morality that is invariant under introspection?

"Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them."

You can make entropy run in reverse in one area as long as a compensating amount of entropy is generated somewhere within the system. What do you think a refrigerator is? What if the extra entropy that needs to be generated in order to rewind is shunted off to some distant corner of the universe that doesn't affect the area you are worried about?
I'm not talking about literally making time go in reverse. You can achieve what is functionally the same thing by reversing all the atomic reactions within a volume and shunting the entropy generated by the energy you used to do this to some other area.

anon: "The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not."

I am quite aware of that. Anyway, using "cheesecake" as a placeholder adds a bias to the whole story.

"Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings."

Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that "big cheesecake" is unlikely.

Thinking about it, AFAIK Eliezer considers himself a rationalist. Isn't a big part of rationalism about disputing values that are merely consequences of our long history?

I agree that it's not all-out impossible under the laws of thermodynamics, but I personally consider it rather unlikely to work on the scales we're talking about. This all seems somewhat tangential though; what effect would it have on the point of the post if "rewinding events" in a macroscopic volume of space was theoretically possible, and easily within the reach of a good recursively self-improving AGI?

luzr: Using anything but "cheesecake" as a placeholder adds a bias to the whole story, in that case.

luzr: The strength of an optimizing process (i.e. an intelligence) does not necessarily dictate, or even affect too deeply, its goals. This has been one of Eliezer's themes. And so a superintelligence might indeed consider incredibly valuable something that you wouldn't be interested in at all, such as cheesecake, or smiling faces, or paperclips, or busy beaver numbers. And this is another theme: rationalism does not demand that we reject values merely because they are consequences of our long history. Instead, we can reject values, or broaden them, or otherwise change our moralities, when sufficient introspection forces us to do so. For instance, consider how our morality has changed to reject outright slavery; after sufficient introspection, it does not seem consistent with our other values.

"what effect would it have on the point"

If rewinding is morally unacceptable (erasing could-have-been sentients) and you have unlimited power to direct the future, does this mean that all the could-have-beens from futures you didn't select are on your shoulders? This is directly related to another recent post. If I choose a future with fewer sentients who have a higher standard of living, am I responsible for the sentients that would have existed in a future where I chose to let a higher number of them be created?
If you're a utilitarian this is *the* delicate point. At what point are two sentients with a certain happiness level worth one sentient with a higher happiness level?
Does a starving man steal bread to feed his family? This turns into: Should we legitimize stealing from the baker to feed as many poor as we can?


Most of our choices have this sort of impact, just on a smaller scale. If you contribute a real child to the continuing genetic evolution process, if you contribute media articles that influence future perceptions, if you contribute techs that change future society, you are in effect adding to and changing the sorts of people there are and what they value, and doing so in ways you largely don't understand.

A lot of futurists seem to come to a similar point, where they see themselves on a runaway freight train, where no one is in control, knows where we are going, or even knows much about how any particular track switch would change where we end up. They then suggest that we please please slow all this change down so we can stop and think. But that doesn't seem a remotely likely scenario to me.

The difference between reality and this hypothetical scenario is where control resides. I take no issue with the decentralized future roulette we are playing when we have this or that kid with this or that person; all my study of economics and natural selection indicates that such decentralized methods are self-correcting. In this scenario, though, we approach the point where the future cone could have this or that bit snuffed out by the decision of a singleton (or a functional equivalent); advocating that *this* sort of thing be slowed down so that we can weigh the decisions carefully seems prudent. Isn't this sort of the main thrust of the Friendly AI debate?

"please please slow all this change down"

No way no how. Bring the change on, baby. Bring.It.On.

For those who complain about being on your toes all the time, I say take ballet.

I'd agree with the sentiment in this post. I'm interested in building artificial brain stuff, more than building Artificial People. That is, a computational substrate that allows the range of purpose-oriented adaptation shown in the brain, but with different modalities. Not neurally based, because simulating neural systems on a system where processing and memory are split defeats the majority of the point of them for me.

Democracy is a dumb idea. I vote for aristocracy/apartheid. Considering the disaster of the former Rhodesia, currently Zimbabwe, and the growing similarities in South Africa, the actual historical apartheid is starting to look pretty good. So I agree with Tim M, except I'm not a secular humanist.

I'm not sure I understand how sentience has anything to do with anything (even if we knew what it was). I'm sentient, but cows would continue to taste yummy if I thought they were sentient (I'm not saying I'd still eat them, of course).

Anyways, why not build an AI whose goal is to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.

Anon: "The notion of "morally significant" seems to coincide with sentience."

Yes; the word "sentience" seems to be just a placeholder meaning "qualifications we'll figure out later for being thought of as a person."

Tim: Good point, that people have a very strong bias to associate rights with intelligence, whereas empathy is a better criterion. Problem being that dogs have lots of empathy. Let's say intelligence and empathy are both necessary but not sufficient.

James: "Shouldn't this outcome be something the CEV would avoid anyway? If it's making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist."

CEV is not a magic "do what I mean" incantation. Even supposing the idea were worked out, before the first AI is built, you probably don't have a mechanism to implement it.

anon: "It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves."

Something is missing from that sentence. Whatever you meant, let's not rule out creating new species. We should, eventually.

Eliezer: Creating new sentient species is frightening. But is creating new non-sentient species less frightening? Any new species you create may out-compete the old and become the dominant lifeform. It would be a big loss to create a non-sentient species that replaced sentient life.

"Anyways, why not build an AI whose goal is to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out."

And it doesn't consider it significant that this one hack that boosts IQ by 100 points makes us miserable/vegetables/sadists/schizophrenic/take your pick. Or think that it should have asked before turning the rest of the solar system into computronium. And, of course, it won't hold with the existence of anything intelligent enough to potentially turn it off, and so on....

The Hidden Complexity of Wishes

Nick, that's why I said non-coercively (though looking back on it, that may be a hard thing to define for a super-intelligence that could easily trick humans into becoming schizophrenic geniuses). But isn't that a problem with any self-modifying AI? The directive "make yourself more intelligent" relies on definitions of intelligence, sanity, etc. I don't see why it would be any more likely to screw up human intelligence than its own.

If the survival of the human race is one's goal, I wouldn't think keeping us at our current level of intelligence is even an option.

Offering someone a pill that'll make them a schizophrenic genius, without telling them about the schizophrenia part, doesn't even fall under most (any?) ordinary definitions of "coercion". (Which vary enough to have whole opposing political systems be built on them – if I'm dependent on employment to eat, am I working under coercion?)

An AI improving itself has a clear definition of what not to mess with – its current goal system.

Nick,

Understood; though I'd call fraud coercion, the use of the word is a side-issue here. However, an AI improving humans could have an equally clear view of what not to mess with: their current goal system. Indeed, I think if we saw specialized AIs that improved other AIs, we'd see something like this anyway. The improved AI would not agree to be altered unless doing so furthered its goals; i.e. the improving was unlikely to alter its goal system.

Not telling people about harmful side-effects that they don't ask about wasn't considered fraud when all the food companies failed to inform the public about Trans Fats, as far as I can tell. At the least, their management don't seem to be going to jail over it. Not even the cigarette executives are generally concerned about prison time.

I agree with Phil; all else equal I'd rather have whatever takes over be sentient. The moment to pause is when you make something that takes over, not so much when you wonder if it should be sentient as well.

Implementing an algorithm is simpler than optimizing for morality: you have all kinds of equivalences at your disposal, and you can undo anything. If the first AI doesn't itself contribute any moral content, you (or it) are free to renormalize it in any way: recreating it the way it was supposed to be built, as opposed to the way it was actually built, experimenting with its implementation, emulating its runs, and so on. If, on the other hand, its structure is morally significant, rebuilding might no longer be an option, and the final result may be worse than what could have been created starting from a morally blank slate (for the AI implementation). Morality is not time-reversible, and making a moral mistake at the point that is to guide the dynamic of moral growth for the future may be much more costly than it looks on the surface. Ending up obliged to hand most of the universe over to "paperclipping", because it would be morally wrong not to give it away to the new mind, is a real possibility, so we'd better avoid taking on that responsibility before understanding how reversible or irreversible the decision will turn out to be.

Sentience is one of the basic goods. If the sysop is non-sentient, then whatever computronium is used in the sysop is, WRT sentience, wasted.

If we suppose that intelligences have a power-law distribution, and the sysop is the one at the top, we'll find that it uses up something around 20% to 50% of the accessible universe's computronium.

That would be a natural (as in "expected in nature") distribution. But since the sysop needs to stay in charge, it will probably destroy any other AIs who reach the "second tier" of intelligence. So it will more likely have something like 70% - 90% of the universe's computronium.

Also, in this post-human world, there aren't large penalties for individuality. That is: In today's world, you can't add up 3 chimpanzee brains and get human-level intelligence. In the world of AIs, you probably can. This means that, to stay on top, the sysop will always need to reserve a majority of the universe's computronium for itself. Otherwise, the rest of the universe can gang up on it.

So creating a non-sentient sysop means cutting the amount of sentient life you can support by at least half.
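For what it's worth, the "20% to 50%" figure depends entirely on which power law is meant. Here is a minimal Python sketch, offered only as an illustration: the Zipf-style rank-size form (power proportional to 1/rank^s), the exponent values, and the population of a million entities are assumptions of mine, since the comment specifies none of them.

```python
# Hedged sketch only: "power-law distribution" is given no functional form or
# exponent in the comment, so the Zipf-style rank-size law below, the exponents,
# and the population of one million entities are illustrative assumptions.

def top_share(n_entities: int, s: float) -> float:
    """Fraction of total power held by the rank-1 entity when power ~ 1/rank**s."""
    total = sum(1.0 / rank ** s for rank in range(1, n_entities + 1))
    return 1.0 / total  # the rank-1 entity has unnormalized weight 1/1**s = 1

if __name__ == "__main__":
    for s in (1.0, 1.5, 2.0):
        share = top_share(10 ** 6, s)
        print(f"exponent s={s}: top entity holds about {share:.0%} of the total")
```

Under those assumptions, an exponent of 1 gives the top entity only about 7% of the total, while exponents of 1.5 and 2 give roughly 38% and 61%, so the quoted range corresponds to fairly steep distributions.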

I am uncomfortable with the notion that there is an absolute measure of whether (or to what degree) a particular entity is morally significant. It seems to touch on Eliezer's discarded idea of Absolute Morality. Is it an intrinsic property of reality whether a given entity has moral significance? If so, what other moral questions can be resolved Absolutely?

Isn't it possible, or even likely, that there is no Absolute measure of moral significance? If we accept that other moral questions do not have Absolute answers, why should this question be different?

Hal: Within a given 'moral reference frame', there is an absolute measure of significance.

Hal, while many of our moral categories do seem to be torturable by borderline cases, if we get to pick the system design, we can try to avoid a borderline case.

"Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions."

That sounds like self-referential logic to me. What could possibly understand the implications of a new intelligence, except a test run of the whole or part of that new intelligence?

I really like your site and your writings, as they always seem to enrich my own thoughts on similar subjects. But I do find that I disagree with you on one point. I would just start writing the software to test out your theories, as the proof is in the pudding. Discussing logic and processes in ordinary English is just so long-winded and fuzzy. How can you know that anything is logically sound unless you just put it all together and see how it all lines up?

I'm sure you do write many formulas and test programs. I just mean that, in general, I feel your site would be enriched by demos of your concepts, say implemented in JavaScript and embedded in your blog, so that people could just run them.

