
January 19, 2009

Comments

We might even cooperate in the Prisoner's Dilemma. But we would never be friends with them. They would never see us as anything but means to an end. They would never shed a tear for us, nor smile for our joys. And the others of their own kind would receive no different consideration, nor have any sense that they were missing something important thereby.

...but beware of using that as a reason to think of them as humans in chitin exoskeletons :-)

This may be a repurposing of a hunting behavior - to kill an X you have to think like an X.

I don't think a merely unsympathetic alien need be amoral or dishonest - they might have worked out a system of selfish ethics or a clan honor/obligation system. They'd need something to stop their society atomizing. They'd be nasty and merciless and exploitative, but it's possible you could shake appendages on a deal and trust them to fulfill it.

What would make a maximizer scary is that its prime directive completely bans sympathy or honor in the general case. If it's nice, it's lying. If you think you have a deal, it's lying. It might be lying well enough to build a valid sympathetic mind as a false face - it isn't reinforced by even its own pain. If you meet a maximizer, open fire in lieu of "hello".

Julian,

Agreed. Utilitarians are not to be trusted.

kekeke

So "good" creatures have a mechanism which simulates the thoughts and feelings of others, making it have similar thoughts and feelings, whether they are pleasant or bad. (Well, we have a "but this is the Enemy" mode, some others could have a "but now it's time to begin making paperclips at last" mode...)

For me, feeling the same seems to be much more important. (See dogs, infants...) So thinking in AI terms, there must be a coupling between the creature's utility function and ours. It wants us to be happy in order to be happy itself. (Wireheading us is not sufficient, because the model of us in its head would feel bad about it, unchanged in the process... it's some weak form of CEV.)

So is an AI sympathetic if it has this coupling in its utility function? And with whose utilities? Humans? Sentient beings? Anything with a utility function? Chess machines? (Losing makes them really really sad...) Or what about rocks? Utility functions are just a way to predict some parts of the world, after all...

My point is that a definition of sympathy also needs a function to determine who or what to feel sympathy for. For us, this seems to be "everyone who looks like a living creature or acts like one", but it's complicated in the same way as our values. Accepting "sympathy" and "personlike" for the definition of "friendly" could easily be turtles all the way down.
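(A minimal sketch of the "coupling" idea above, assuming an agent whose utility function includes a model of the human's utility. Everything here, the class name, the weighting scheme, the function signatures, is hypothetical illustration, not anything proposed in the post.)

```python
# Hypothetical sketch: an agent whose utility is coupled to its *model*
# of the human's utility. Wireheading the human doesn't help it, because
# the model of the human in its head is what gets scored, not the human's
# raw pleasure signal.

class CoupledAgent:
    def __init__(self, own_utility, model_of_human_utility, weight=1.0):
        # own_utility: world_state -> float, the agent's private goals
        # model_of_human_utility: world_state -> float, the agent's model
        #   of how the human evaluates that state
        self.own_utility = own_utility
        self.model_of_human_utility = model_of_human_utility
        self.weight = weight

    def utility(self, world_state):
        # The coupling: the agent's utility rises when (its model of)
        # the human's utility rises.
        return (self.own_utility(world_state)
                + self.weight * self.model_of_human_utility(world_state))

    def choose(self, actions, predict_outcome):
        # Pick whichever action leads to the highest-scoring predicted outcome.
        return max(actions, key=lambda a: self.utility(predict_outcome(a)))
```

The open question raised above, which minds get a coupling term at all, is exactly the part this sketch leaves unspecified.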

Julian Morrison: They'd need something to stop their society atomizing.

Assuming they had a society. To have a society you need:

1. lots of independent actors with their own goals.

2. interdependence, i.e. the possibility of beneficial interaction between the actors.

What if an alien life form were something like an ant colony? If there were only one breeder in the colony, the "queen", all the sterile members of the colony could only facilitate the passing on of their genes by co-operating with the queen and the colony's hierarchy. There'd be no reason for them to evolve anything like a desire for independence. (In fact most colony members would have few desires other than to obey their orders and keep their bodies in functional shape.) They would have no more independence than the cells in my liver do.

So an "ant colony" type of intelligence would have no society in this sense. On of the big flaws in Speaker For The Dead is that the Hive Queen is depicted with the ability to feel empathy, something that evoloution wouldn't havce given it. Instead it would see other life forms as potentially-useful and potentially-harmful machines with levers on them. Even the war with th humans wouldn't make the Hive Queen think of us as an enemy; to them it would be more like clearing a field of weeds or eradicating smallpox.

"To a paperclip maximizer, the humans are just machines with pressable buttons. No need to feel what the other feels - if that were even possible across such a tremendous gap of internal architecture. How could an expected paperclip maximizer "feel happy" when it saw a human smile? "Happiness" is an idiom of policy reinforcement learning, not expected utility maximization. A paperclip maximizer doesn't feel happy when it makes paperclips, it just chooses whichever action leads to the greatest number of expected paperclips. Though a paperclip maximizer might find it convenient to display a smile when it made paperclips - so as to help manipulate any humans that had designated it a friend."

Correct me if I'm wrong, but haven't you just pretty accurately described a human sociopath?
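(A toy contrast between the two idioms named in the quoted paragraph, under my own simplified assumptions; the class names and parameters are illustrative, not from the post. The reinforcement learner carries an internal reward signal, the thing "happiness" is an idiom of, while the expected paperclip maximizer has no such signal and only ranks actions by expected paperclips.)

```python
import random

class PolicyReinforcementLearner:
    def __init__(self, actions, learning_rate=0.1, epsilon=0.1):
        self.values = {a: 0.0 for a in actions}   # learned action values
        self.lr = learning_rate
        self.epsilon = epsilon

    def act(self):
        # Mostly exploit, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def observe_reward(self, action, reward):
        # "Happiness" lives here: a signal that reinforces past behaviour.
        self.values[action] += self.lr * (reward - self.values[action])

class ExpectedPaperclipMaximizer:
    def __init__(self, expected_paperclips):
        # expected_paperclips: action -> expected number of paperclips
        self.expected_paperclips = expected_paperclips

    def act(self, actions):
        # No reward signal, no reinforcement, nothing to "feel":
        # just pick whichever action maximizes expected paperclips.
        return max(actions, key=self.expected_paperclips)
```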

This was my problem reading C.J. Cherryh's Foreigner. Not that the protagonist kept making the mistake of expecting the aliens to have human emotions, but that they sometimes did seem to act on human emotions that they lacked the neurology for. Maybe there is justification later in the series, but it seemed like a failure to fully realize an alien psychology, quite likely because of the difficulties that would cause in relating it to a human audience.

Contrary to Cabalamat, I think empathy was explained for the Hive Queen, in the history of establishing cooperation between queens. The first one to get the idea even practiced selective breeding on its own species until it found another that could cooperate. Or maybe the bits about empathizing with other minds (particularly human minds) were just a lie to manipulate the machine-with-levers that almost wiped out its species.

Julian, unsympathetic aliens might well develop an instinct to keep their promises. I happen to think that even paperclip maximizers might one-box on Newcomb's Problem (and by extension, cooperate on the true one-shot Prisoner's Dilemma with a partner who they believe can predict their decision). They just wouldn't like each other, or have any kind of "honor" that depends on imagining yourself in the other's shoes.
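(A back-of-envelope version of the point about cooperating against a predictor. The payoff numbers, in paperclips, and the predictor-accuracy values are arbitrary choices for illustration, not anything stated in the comment.)

```python
# One-shot Prisoner's Dilemma against a partner who predicts my move
# correctly with probability `predictor_accuracy` and then mirrors it.

def expected_payoff(my_move, predictor_accuracy, payoffs):
    other_if_right = my_move
    other_if_wrong = "D" if my_move == "C" else "C"
    return (predictor_accuracy * payoffs[(my_move, other_if_right)]
            + (1 - predictor_accuracy) * payoffs[(my_move, other_if_wrong)])

# Standard PD ordering: temptation > reward > punishment > sucker.
payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

for p in (0.5, 0.9, 0.99):
    ev_c = expected_payoff("C", p, payoffs)
    ev_d = expected_payoff("D", p, payoffs)
    print(f"accuracy={p}: E[C]={ev_c:.2f}, E[D]={ev_d:.2f}")

# Against a coin-flip "predictor" (p=0.5), defecting wins; against an
# accurate predictor (p=0.9 or 0.99), E[C] exceeds E[D], so even an agent
# with no sympathy or honor cooperates, purely for the expected paperclips.
```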

Latanius, a Friendly AI the way I've described it is a CEV-optimizer, not something that feels sympathetic to humans. Human sympathy is one way of being friendly; it's not the only way or even the most reliable way. For FAI-grade problems it would have to be exactly the right kind of sympathy at exactly the right kind of meta-level for exactly the right kind of environmental processes that, as it so happens, work extremely differently from the AI. If the optimizer you're creating is not a future citizen but a nonsentient means to an end, you just write a utility function and be done with it.

Mike Blume, the hypothesis would be "human sociopaths have empathy but not sympathy".

The core of most of my disagreements with this article finds its most concentrated expression in:

"Happiness" is an idiom of policy reinforcement learning, not expected utility maximization.

Under Omohundro's model of intelligent systems, these two approaches converge. As they do so, the reward signal of reinforcement learning and the concept of expected utility also converge. In other words, it is rather inappropriate to emphasize the difference between these two systems as though it were a fundamental one.

There are differences - but they are rather superficial. There is often a happiness "set point", for example, whereas that concept is typically more elusive for an expected utility maximizer. However, the analogies between the concepts are deep and fundamental: an agent maximising its happiness is doing something deeply and fundamentally similar to an agent maximising its expected utility. That becomes obvious if you substitute "happiness" for "expected utility".

In the case of real organisms, that substitution is doubly appropriate - because of evolution. The "happiness" function is not an arbitrarily chosen one - it is created in such a way that it converges closely on a function that favours behaviour resulting in increased expected ancestral representation. So, happiness gets an "expectation" of future events built into it automatically by the evolutionary process.
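(A small illustration of the convergence claim above, under my own toy assumptions; the functions, the set point, and the numbers are made up for the example. Both agents run the same argmax loop; the only difference is the name of the function being maximized. If evolution shapes the "happiness" function to track expected ancestral representation, the two agents' choices coincide.)

```python
def choose(actions, predict_outcome, score):
    # The shared skeleton: rank predicted outcomes by some scoring function.
    return max(actions, key=lambda a: score(predict_outcome(a)))

def expected_fitness(state):
    # Hypothetical stand-in for "expected ancestral representation".
    return state.get("expected_offspring", 0.0)

def happiness(state):
    # Assumed here: evolution tunes this signal so that, in ancestral
    # environments, it approximates expected fitness plus a set point.
    SET_POINT = 5.0
    return SET_POINT + expected_fitness(state)

actions = ["forage", "rest"]
outcomes = {"forage": {"expected_offspring": 1.2},
            "rest": {"expected_offspring": 0.4}}
predict = outcomes.get

# The happiness-maximizer and the fitness-maximizer pick the same action.
assert choose(actions, predict, happiness) == choose(actions, predict, expected_fitness)
```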

Zubon: I think empathy was explained for the Hive Queen, in the history of establishing cooperation between queens. The first one to get the idea even practiced selective breeding on its own species until it found another that could cooperate.

You may be right -- it's some time since I read the book.

Mirror neurons and the human empathy-sympathy system play a central role in my definition of consciousness, sentience and personhood - or rather, in my dissolving of the question of what consciousness, sentience and personhood are.

But if human sociopaths lack sympathy, that doesn't prevent US from having sympathy for THEM at all. Likewise, it's not at all obvious that we CAN have sympathy for aliens with a completely different cognitive architecture, even if they have sympathy for one another. An octopus is intelligent, but if I worry about its pain I think that I am probably purely anthropomorphizing.

Oh, and it also probably models the minds of onlookers by reference to its own mind when deciding on a shape and color for camouflage, which sounds like empathy.
