
July 06, 2008

Comments

Subhan: "You're not escaping that easily! How does a universe in which murder is wrong, differ from a universe in which murder is right? How can you detect the difference experimentally? If the answer to that is 'No', then how does any human being come to know that murder is wrong?"
...
Obert: "Because it seems blue, just as murder seems wrong. Just don't ask me what the sky is, or how I can see it."

But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics. These explanations screen off our apparent moral perceptions from any other influence. In other words, conditioned on these explanations being true, our moral perceptions are independent of (i.e. uncorrelated with) any possible morality-as-given, even if it were to exist.

So there is a stronger argument against Obert than the one Subhan makes. It's not just that we don't know how we can know about what is right, but rather that we know we can't know, at least not through these apparent moral perceptions/intuitions.
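The screening-off claim here has a precise Bayesian reading. Below is a minimal sketch of that structure in Python; the toy joint distribution (and every number in it) is invented purely for illustration, and the perception variable is stipulated to depend only on the explanation, which is exactly the structure the claim asserts:

# Toy illustration of "screening off" (conditional independence).
# All probabilities here are invented for illustration only.
#
# Variables: M = morality-as-given says murder is wrong (True/False)
#            E = the game-theory/evo-psych/memetics explanation holds (True/False)
#            P = we perceive murder as wrong (True/False)
#
# We stipulate a joint distribution in which P depends only on E,
# so conditioning on E makes P independent of M.

p_M = {True: 0.5, False: 0.5}           # prior over morality-as-given
p_E = {True: 0.9, False: 0.1}           # prior over the explanation holding
p_P_given_E = {True: 0.99, False: 0.2}  # perception depends on E alone

def joint(m, e, p):
    pr_p = p_P_given_E[e] if p else 1 - p_P_given_E[e]
    return p_M[m] * p_E[e] * pr_p

def cond_p_perception(e, m=None):
    """P(P=True | E=e), or P(P=True | E=e, M=m) when m is given."""
    ms = [m] if m is not None else [True, False]
    num = sum(joint(mm, e, True) for mm in ms)
    den = sum(joint(mm, e, pp) for mm in ms for pp in (True, False))
    return num / den

# Conditioned on the explanation, the perception no longer depends on M:
print(round(cond_p_perception(True, m=True), 6))   # 0.99
print(round(cond_p_perception(True, m=False), 6))  # 0.99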

As with the human aesthetic sense, human morality may consist of approximations of more absolutely definable optimal solutions to information-theoretic, game-theoretic, social, economic, intelligence, signaling, and cooperation problems. It may therefore be likely that an alien race would share some of the same values we do, because they may turn out to be "good" solutions for intelligent, culture-bearing species in general. But there is nothing in the universe itself that says these optimal solutions, or any value whatsoever, must be valuable, and I don't understand why some atheists even expect there to be something like this. It is minds that give value to certain phenomena, and it usually happens because our emotional circuitry was wired by evolution to value something.

But I see no problem in trying to choose some of the more general-looking values evolution has given us and making them even more general and refined. We should do this in such a way as to keep us stable and happy, but also so that we have a rich future mind-space to move in. I think we have already done this in part with Goodness, Truth, and Beauty for their own sake, but they are not easily definable. It is not easy to find a sweet spot between a species-specific version and a more general one that is free and pleasing to as many minds as possible. I think we should continue to refine this sweet spot and work towards some values for their own sake, not only because they give us pleasure. I think this is important and can add to the stability of an individual as well as of a society and an AI. It is certainly a dangerous thing, but I see it as essential, at least for a human mind.

One problem, though, may be that we are wired so that, if we don't believe that morality or some other value is intrinsically rooted and valued by some great, preferably immortal and omnipresent authority (God, the Universe, Nature), then we may have trouble behaving morally or in accordance with that value just because we choose to do so. Some strong people may find this easy. But many find it very hard, and I think that is the prime reason why religion still persists today, even though most people know somewhere deep down that it is a dead practice.
And I will admit, though not with any pride, that I myself have great trouble pursuing and actually doing what I value, systematically and with pleasure and discipline, even though I know about all this. It may be that something is particularly wrong with my brain, or it may be quite a widespread problem.

Thank you, Eli, for keeping on enlightening my path, day after day. I have been lurking here for 2.5 years now, and you have totally changed my life. Before you, there was no person I could really trust in deep matters, and now, because you are so often right and I have to be so careful about trusting you, I have also become extremely careful about trusting my own intuitions and thought patterns. You teach people to think with rigorous self-critique, extreme precision, dedication, and mindful focus on the actual territory and possibility-space of study, without getting trapped in the usual mind projection fallacy and other biases.

Keep on fighting! Your books will sell well, and it will fuel your goals!

And to answer Obert's objection that Subhan's position doesn't quite add up to normality: before we knew game theory, evolutionary psychology, and memetics, nothing screened off our moral perceptions/intuitions from a hypothesized objective moral reality, so that was perhaps the best explanation available, given what we knew back then. And since that was most of human history, it's no surprise that morality-as-given feels like normality. But given what we know today, does it still make sense to insist that our meta-theory of morality add up to that normality?

I will try to express some of my points more accurately...
A human value, whether it concerns knowledge, morality, or beauty, gets its meaning from its emotional base, although it may be a frequent value in the space of possible intelligent species. Only minds can attribute value to something. The thing value is attributed to may be universal or specific, but the thing itself cannot be valued by anything other than a mind. To value something is a cognitive, emotional process, not some intrinsic property of a phenomenon.
But believing this, as the mind you are, may not be the best way to achieve what you value and desire. We seem to work most efficiently towards a value when we believe it is intrinsically true and the only way. It may slow that process down considerably to know that values can't be rooted in anything outside of minds. It may also be liberating knowledge, and may fuel your productivity and mood; it may depend on your starting assumptions and expectations concerning values in general. My solution is to pick some very general values after serious consideration, and then to start almost religiously working towards optimizing them, while being open and critical of everything else, without ever introducing magical thinking or supernatural phenomena, just trying to hack my own mind.

I think that we need a much better explanation of this word "mind". Supposedly mind space contains a -1 for every 1, but that simply sounds like system space.
I honestly think that the ontology has to go deeper here before progress is possible. Similar problem to the Born postulates and why we aren't Boltzmann brains.

Similar problem to the Born postulates and why we aren't Boltzmann brains.
We are Boltzmann brains. You simply don't appreciate what restrictions are inherent in specifying the subset of brains that can be called "we".

Not that this has anything to do with the topic, which everyone is very carefully skating around without addressing: what are operational definitions for right and wrong? When Obert says "Because it seems blue, just as murder seems wrong.", what collection of properties does wrong refer to? For that matter, what does blue refer to?

These questions have very simple and obvious answers which you will never grasp until you force yourself to face the questions. You mean something when you use the terms - you already recognize what is implied when you or someone else uses those terms. Now make that recognition explicit instead of implicit.

Do not "philosophize". That is attempting to understand the territory by making a diagram of map-making. It's adding another layer of analysis between you and the core concept, like an oyster adding a layer of nacre around an irritating bit of sand. You do not need a more complex ontology - you need to abolish the ontology.

I don't think you have to postulate Space Cannibals in order to imagine rational creatures who don't think murder is wrong. For a recent example, consider Rwanda 1994.

And I think it's quite possible that there might exist moral facts which humans are incapable of perceiving. We aren't just universal Turing machines, after all. Billions of years of evolution might produce creatures with moral blind spots, analogous to the blind spot in the human eye. Just as the squid's eye has no blind spot, a different evolutionary path might produce creatures with a greater or lesser innate capacity to perceive goodness than ourselves.

Maybe this will make it easier:

Obert says "just as murder seems wrong". There is a redundancy in that phrase. What is the redundancy, and why doesn't Obert perceive it as one?

What is the difference between saying something is a rube and not a blegg, and saying that someone appears to be a rube and not a blegg?

What is the difference between saying something is imperceivable, and saying something appears to be imperceivable?

The notion of morality as subjectively objective computation seems a lot closer to Subhan's position than Obert's.

Yes, EY's past positions about Morality are closer to Subhan's than Obert's. But AGI is software programming and hardware engineering, not being a judge or whoever writes laws.
I wouldn't suggest deifying EY if your goal is to learn ethics.

(Quoting an earlier comment:) "But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics. These explanations screen off our apparent moral perceptions from any other influence. In other words, conditioned on these explanations being true, our moral perceptions are independent of (i.e. uncorrelated with) any possible morality-as-given, even if it were to exist."

Let's try the argument with mathematics: we know why we think 5 is a prime number. It's completely explained by our evolution, experiences, and so on. Conditioned on these explanations being true, our mathematical perceptions are independent of mathematical-truth-as-given, even if it were to exist.

The problem is that mathematical-truth-as-given may shape the world and therefore shape our experiences. That is, we may have had the tremendous difficulty we had in factorizing the number 5 precisely because the number 5 is in fact a prime number. So one place where one could critique your argument is in the bit that goes: "conditioned on X being the case, then our beliefs are independent of Y". The critique is that X may in fact be a consequence of Y, in which case X is itself not independent of Y.

"But AGI is [...] not being a judge or whoever writes laws."

If Eliezer turns out to be right about the power of recursive self-improvement, then I wouldn't be so sure.

Richard, we can understand how there would be evolutionary pressure to produce an ability to see light, even if imperfect. But what possible pressure could produce an ability to see morality?

"You're not escaping that easily! How does a universe in which murder is wrong, differ from a universe in which murder is right? How can you detect the difference experimentally? If the answer to that is 'No'...

Minor quibble - 'no' is not a sensical answer to any of those questions. Possibly remove the word 'how' from one of them?

Once again, no revelations that I haven't come across on my own, but crystallised and clarified brilliantly. Looking forward to the next few.

It seems to me that Obert makes a faulty interpretation of "there is no reason to talk about a 'morality' distinct from what people want.", but i would like to know what the author thinks. In my view, that assertion says not that ALL MORAL CLAIMS ARE WHIMS, but instead that to understand and parse and compare moral claims we have to resort to wants. In other words, that WANTS ARE THE OBJECT OF MORALITY, THOUGH NOT IT'S MATTER. To understand any moral claim we have to consider how it imparts onto what real, concrete persons feel and desire.

"I want pie" and "I deserve pie" are different, but i don't see how Subhan's arguments aspire to make them equal.

Obert's arguments seem much closer to "how it feels from the inside"; Subhan in general does seem to have the stronger actual arguments. However:

"For every kind of stone tablet that you might imagine anywhere, in the trends of the universe or in the structure of logic, you are still left with the question: 'And why obey this morality?'" This, to me, smells of zombieism. "for any configuration of matter/energy/whatever, we can ask 'and why should we believe that this is actually conscious rather than just a structure immitating a consciousness?'"

(ZMDavis wrote:) "'But AGI is [...] not being a judge or whoever writes laws.'

If Eliezer turns out to be right about the power of recursive self-improvement, then I wouldn't be so sure."

Argh. I didn't mean that as a critique of EY's prowess as an AGI theorist or programmer. I doubt Jesus would have wanted people to deify him, just to be nice to each other. I doubt EY meant for his learning of philosophy to be interpreted as some sort of moral code; he was just arrogant enough not to state that he was sometimes using his list as a tool to develop his own philosophy. I'm assuming any AGI project would be a team, and I doubt he'd dispute that his best comparative advantage is not ethics. Maybe he plans on writing the part of the code that tells an AGI how to stop using resources for a given job.

So here's a question Eliezer: is Subhan's argument for moral skepticism just a concealed argument for universal skepticism? After all, there are possible minds that do math differently, that do logic differently, that evaluate evidence differently, that observe sense-data differently...

Either Subhan can distinguish his argument from an argument for universal skepticism, or I say that it's refuted by reductio, since universal skepticism fails due to the complete impossibility of asserting it consistently, plus things like Moorean facts.

Phillip, you're the one who brought up "deification," in response to my one-line comment, which you seem to have read a lot into. My second comment was intended to be humorous. I apologize for the extent to which I contributed to this misunderstanding.

Eliezer seems to suggest that the only possible choices are morality-as-preference or morality-as-given, e.g. with reasoning like this:

[...] the morality-as-preference viewpoint is a lot easier to shoehorn into a universe of quarks. But I still think the morality-as-given viewpoint has the advantage [...]

But really, evolutionary psychology, plus some kind of social contract for group mutual gain, seems to account for the vast bulk of what people consider to be "moral" actions, as well as the conflict between private individual desires vs. actions that are "right". (People who break moral taboos are viewed not much differently from traitors in wartime, who betray their team/side/cause.)

I don't understand this series. Eliezer is writing multiple posts about the problems with the metatheories of morality as either preferences or given. Sure, both those metatheories are wrong. Is that really so interesting? Why not start to tackle what morality actually is, rather than merely what it is not?

I think it's probably useful to taboo the word "should" for this discussion. I think when people say you "should" do X rather than Y, it means something like "experience indicates X is more likely to lead to a good outcome than Y". People tend to have rule-based rather than consequence-based moral systems because the full consequences of one's actions are unforeseeable. A rule like "one shouldn't lie" comes about because experience has shown that lying often has negative consequences for the speaker, the listener, and possibly others as well, although the particular consequences of a particular lie may be unforeseeable.
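On this reading, "you should do X rather than Y" compresses into a comparison of expected outcomes. A minimal sketch in Python; the probabilities and outcome values are made up purely for illustration, and the rule "don't lie" is just the running example from the paragraph above:

# Minimal sketch of "should X rather than Y" as an expected-outcome comparison.
# All probabilities and outcome values below are invented for illustration.

def expected_outcome(outcomes):
    """outcomes: list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Hypothetical: telling the truth vs. telling a convenient lie.
truth = [(0.8, +1.0),   # usually things go fine
         (0.2, -0.5)]   # sometimes the truth stings a little
lie   = [(0.6, +1.5),   # often you get away with it
         (0.4, -5.0)]   # but being caught is very costly

if expected_outcome(truth) > expected_outcome(lie):
    print("By these numbers, you 'should' tell the truth.")

A rule like "one shouldn't lie" then encodes accumulated experience about such comparisons, useful precisely because the particular consequences of a particular lie are unforeseeable.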

I don't see how there can be agreement as to moral principles unless there is first a reasonably good agreement as to what constitutes good and bad final states.

Relationships are real. For example if a plant is "under" a table, that is a fact, not a subjective whim of the observer. So if morality is a relationship, then aliens and man can have different moralities but both be objective, not subjective. The relationship would be between the object sought and the entity seeking it, e.g. murder + man = bad, murder + alien = good.

Paul Gowder,

Yes, there are possible minds that do math/logic/deduction differently. Most of these logically possible minds perform even worse than humans in these aspects, and would die out.

In this universe, if one wishes to reach ones goals, one has to choose to (try to) do math/logic/deduction in the correct way; the way that delivers results. What works is determined by the laws of physics and logic that in our universe seem quite coherent and understandable (to a degree, at least).

There's no reason to be skeptical about whether I actually have some goals/preferences. And since I assume that I have some preferences, I have a need to conform to the correct way of doing math/logic/deduction, which is determined by what seems a rather coherent physical universe.

Subhan's question here, "How does a universe in which murder is wrong, differ from a universe in which murder is right? How can you detect the difference experimentally?" is such a gem.

I wonder if Eliezer intended it as parody.


If somebody said to me "morality is just what we do," and presented evidence that the whole apparatus of their moral philosophy was a coherent description of some subset of human psychology and sociology, then that would be enough for me. It's just a description of a physical system. Human morality would be what human animals do. Moral responsibility wouldn't be problematic; moral responsibility could be as physical as gravity if it were psychologically and sociologically real. "I have a moral responsibility" would be akin to "I can lift 200 lbs." The brain is complicated, sure, but so are muscles and bones and motor control. That wouldn't make it a preference or a mere want, either. That's probably where we're headed. But I don't think metaethics is the interesting problem. The deeper problem is, I think, the empirical one: do humans really display this sort of morality?

I've thought about Space Cannibals and the like before (i.e. creatures that kill one of the sexes during sexual reproduction). My suspicion is that even if such creatures evolved and survived, by the time they had a civilization, many would be saying to one another, "There really should be a better way..."

Evidence for this is the fact that even now, there are many human beings claiming it is wrong to kill other animals, despite the fact that humans evolved to kill and eat other animals. Likewise, in the ancestral environment, various tribes usually did kill each other rather than cooperate. But this didn't stop them from beginning to cooperate. So I suspect that Space Cannibals would do something similar. And in any case, I would fully admit that murder couldn't in fact be wrong for the Space Cannibals in the same way it is for us, even if there is an external moral truth.

In answer to Robin's question, assuming that morality exists, it probably has a number of purposes. And if one of the purposes is to preserve things in existence (i.e. moral truths correspond roughly with what is necessary to preserve things), then of course there will be a selection pressure to perceive moral truth. The disclaimer should not be needed, but this is not in any way a claim that it is moral to maximize inclusive genetic fitness.

Robin,
As Eliezer has pointed out, evolution is a nonhuman optimizer which is in many ways more powerful than the human mind. On the assumption that humans have a moral sense, I don't think we should expect to be able to understand why. That might simply be a problem which is too difficult for people to solve.
That aside, a man's virtues benefit the society he lives in; his inclination to punish sin will encourage others to act virtuously as well. If his society is a small tribe of his relatives, then even the weaker forms of kin selection theory can explain the benefit of knowledge of good and evil.

Morality debates irritate me on so many levels. Treating everybody with respect seems to be a good solution to the moral relativism debate.

Treating those who do not deserve respect with respect is basically spitting on those who do deserve it, especially those who work hard for it. I think you need to treat those you don't know with the "presumption of respect"; that is, if you don't know that they don't deserve it, assume they do. Borrowed from Smith's "presumption of rationality"; when you argue with someone, assume that they are rational until they demonstrate otherwise.

Richard, would you accept the same argument about God, that we know there is a God but don't really understand how we know, but gosh darn it we feel like there must be one so there must be one? Yes we evolved to help kin, and we expect many but hardly all other species to do this as well. But unless we know whether that behavior is moral we don't know if that is a process that makes our moral intuitions correlate with moral truth.

Richard, we can understand how there would be evolutionary pressure to produce an ability to see light, even if imperfect. But what possible pressure could produce an ability to see morality?

Let's detail the explanation for light to see if we can find a parallel explanation for morality. Brief explanation for light: light bounces off things in the environment in a way which can in principle be used to draw correct inferences about distant objects in the environment. Eventually, some animals evolve a mechanism for doing just this.

Let's attempt the same for morality. Brief explanation for morality: unlike light, evil is not a simple thing that comes in its own fundamental particles. It is more similar to illness. An alien looking at a human cell might not, from first principles, be able to tell whether the cell was healthy or sick - e.g. whether it has not, or has, fallen victim to an attack rewriting its genetic code. The alien may need to look at the wider context in order to draw a distinction between a healthy cell and an ill cell, and by extension, between a healthy human and an ill human. Nevertheless, illness is real and we are able to tell the difference between illness and health. We have at least two reasons for doing this: an illness might pass to us (if it is infectious), and if we select an ill partner for producing offspring we may produce no offspring.

Evil is more akin to illness than to light, and is even more akin to mental illness. Just to continue the case of mating, if we select a partner who is unusually capable of evil (as compared to the human average) then we may find ourselves dead, or harmed, or at odds with our neighbors who are victimized by our partner. If we select a business partner who is honest then we have an advantage over someone who selects a business partner who is dishonest. In order to tell apart an evil person from a good person we need to be able to distinguish an evil act from a good act.

This is only part of it, but there's a 400-word limit.

Robin,

Our moral intuitions correspond with moral truths for much the same reason that our rational predictions correspond with more concrete physical truths. A man who ignores reason will stick his hand back in the fire after being burned the first time. Such behavior will kill him, probably sooner rather than later. A man who is blind to good and evil may do quite well for himself, but a society whose citizens ignore virtue will suffer approximately the same fate as the twice-burned fool.

Richard, I agree that some social norms help a society prosper while others can "burn" it. And we have the intuition that morally right acts correspond to social norms that help societies prosper. But we would have had that intuition even if morally right acts had corresponded to the opposite. What evolutionary pressure could have produced the correct intuitions about this meta question?

Constant, I would say that objective illness is just as problematic as objective morality; it's just less obviously problematic because in everyday contexts, we're more used to dealing with disputes about morality than about illness. You mention that "if we select an ill partner for producing offspring we may produce no offspring," and in an evolutionary context, probably we could give some fitness-based account of illness. However, this evolutionary concept of "illness" cannot be the ordinary meaning of the word, because no one actually cares about fitness.

I hate to use this example ("gender is the mind-killer," as we learned here so recently), but it's a classic one and a good one, so I'll just go ahead. Take homosexuality. It's often considered a mental disorder, but if someone is gay and happy being so, I would challenge (as evil, even) any attempt to define them as "ill" in anything more than the irrelevant evolutionary sense. I would indeed go much further and say that (for adults, at least) that which the patient desires in herself is health, and that which the patient does not desire in herself is sickness. (I actually seem to remember a similar viewpoint being advanced on The Distributed Republic a while back.) But this only puts us back at discussing preferences and morality.

Constant wrote: So one place where one could critique your argument is in the bit that goes: "conditioned on X being the case, then our beliefs are independent of Y". The critique is that X may in fact be a consequence of Y, in which case X is itself not independent of Y.

Good point, my argument did leave that possibility open. But, it seems pretty obvious, at least to me, that game theory, evolutionary psychology, and memetics are not contingent on anything except mathematics and the environment that we happened to evolve in.

So if I were to draw a Bayesian net diagram, it would look like this:


math --------+    +-- game theory --------------+
             +----+-- evolutionary psychology --+-- moral perceptions
environment -+    +-- memetics -----------------+

Ok, one could argue that each node in this diagram actually represents thousands of nodes in the real Bayesian net, and each edge is actually millions of edges. So perhaps the following could represent a simplification, for a suitable choice of "morality":

math --------+                 +-- game theory --------------+
             +--- morality ----+-- evolutionary psychology --+-- moral perceptions
environment -+                 +-- memetics -----------------+

Before I go on, do you actually believe this to be the case?
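For readers who find the ASCII diagrams hard to parse, here is a rough structural sketch of the second one in Python (the graph encoding and node names are my own rendering of the diagram, not part of the original comment). It checks the property the diagram is meant to convey: every directed path from "morality" to "moral perceptions" passes through the three intermediate nodes, so conditioning on them blocks those paths.

# Sketch: in the second diagram, every directed path from "morality" to
# "moral perceptions" runs through {game theory, evo psych, memetics},
# so conditioning on those three nodes blocks all such paths.
# (In this particular DAG there are no other connecting paths and no
# colliders involved, so blocking the directed paths suffices.)

dag = {
    "math":                     ["morality"],
    "environment":              ["morality"],
    "morality":                 ["game theory", "evolutionary psychology", "memetics"],
    "game theory":              ["moral perceptions"],
    "evolutionary psychology":  ["moral perceptions"],
    "memetics":                 ["moral perceptions"],
    "moral perceptions":        [],
}

def directed_paths(graph, src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        yield from directed_paths(graph, nxt, dst, path)

blockers = {"game theory", "evolutionary psychology", "memetics"}
paths = list(directed_paths(dag, "morality", "moral perceptions"))
print(all(any(node in blockers for node in p[1:-1]) for p in paths))  # True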

I wonder if Eliezer intended it as parody.
He'd be making a serious mistake if so.

Eliezer: You have perhaps already considered this, but I think it would be helpful to learn some lessons from E-Prime when discussing this topic. E-Prime is a subset of English that bans most varieties of the verb "to be".

I find sentences like "murder is wrong" particularly underspecified and confusing. Just what, exactly, is meant by "is", and "wrong"? It seems like agreeing on a definition for "murder" is the easy part.

It seems the ultimate confusion here is that we are talking about instrumental values (should I open the car door?) before agreeing on terminal values (am I going to the store?).

If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.

It is, however, much harder to talk about murders in general, and infeasible to discuss this unless we have agreed on a terminal value to work for.

My earlier comment is not to imply that I think "maximization of human happiness" is the most preferred goal.

An easily obvious one, yes. But faulty; "human" is a severely underspecified term.

In fact, I think that putting in place a One True Global Goal would require ultimate knowledge about the nature of being, to which we do not have access currently.

Possibly, the best we can do is come up with a plausible global goal that suits us for the medium run, while we try to find out more.

That is, after all, what we have always done as human beings.

Wanting to murder doesn't make it right. Nothing makes anything morally right.

Robin,

I don't understand your counterfactual.

"Good" and "Evil" are the names for what people perceive with their moral sense. I think we've agreed that this perception correlates to something universally observable (namely, social survival), so these labels are firmly anchored in the physical world. It looks to me like you're trying to assign these names to something else altogether (namely, something which does not correlate with human moral intuitions), and it's not clear to me how this makes sense.

Richard, if morality just meant social norms that help societies prosper, then of course we have little problem understanding how the two could be correlated, and how we could come to know about them. But if morality means something else, then we face the much harder question of how it is we could know about this something else.

For those impatient to know where Eliezer is going with this series, it looks like he gave us a sneak preview a little more than a year ago. The answer is morality-as-computation.

Eliezer, hope I didn't upset your plans by giving out the ending too early. When you do get to morality-as-computation, can you please explain what exactly is being computed by morality? You already told us what the outputs look like: "Killing is wrong" and "Flowers are beautiful", but what are the inputs?

EY: "human cognitive psychology has not had time to change evolutionarily over that period"

Under selective pressure, human populations can change, and have changed, significantly in less than two thousand years. Various behavioral traits are highly heritable. Genghis Khan spread his behavioral genotype throughout Asia. (For this discussion this is a nitpick, but I dislike seeing false memes spread.)

re: FAI and morality

From my perspective morality is a collection of rules that make cooperative behavior beneficial. There are some rules that should apply to any entities that compete for resources or can cooperate for mutual benefit. There are some rules that improved fitness in our animal predecessors and have become embedded in the brain structure of the typical human. There are some rules that are culture specific and change rapidly as the environment changes. (When your own children are likely to die of starvation, your society is much less concerned about children starving in distant lands. Much of modern Western morality is an outcome of the present wealth and security of Western nations.)

As a start I suggest that a FAI should first discover those three types of rules, including how the rules vary among different animals and different cultures. (This would be an ongoing analysis that would evolve as the FAI capabilities increased.) For cultural rules, the FAI would look for a subset of rules that permit different cultures to interact and prosper. Rules such as kill all strangers would be discarded. Rules such as forgive all trespasses would be discarded as they don't permit defense against aggressive memes. A modified form of tit-for-tat might emerge. Some punishment, some forgiveness, recognition that bad events happen with no one to blame, some allowance for misunderstandings, some allowance for penance or regret, some tolerance for diversity. Another good rule might be to provide everyone with a potential path to a better existence, i.e., use carrots as well as sticks. Look for a consistent set of cultural rules that furthers happiness, diversity, sustainability, growth, and increased prosperity. Look for rules that are robust, i.e., give acceptable results under a variety of societal environments.
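The "modified form of tit-for-tat" mentioned above is usually rendered as something like generous tit-for-tat in the iterated prisoner's dilemma. A minimal sketch; the 10% forgiveness rate and the strategy interface are illustrative assumptions, not a specification of what an FAI would actually do:

import random

# Minimal sketch of a "modified tit-for-tat": cooperate by default,
# retaliate after a defection, but forgive with some probability.
# The 10% forgiveness rate is an arbitrary illustrative choice.

def generous_tit_for_tat(my_history, their_history, forgiveness=0.1):
    """Return 'C' (cooperate) or 'D' (defect) for the next round."""
    if not their_history:
        return "C"                    # open by cooperating
    if their_history[-1] == "D":
        if random.random() < forgiveness:
            return "C"                # occasionally forgive a defection
        return "D"                    # otherwise punish it
    return "C"                        # reciprocate cooperation

# Example round against an opponent who just defected:
print(generous_tit_for_tat(["C", "C"], ["C", "D"]))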

A similar analysis of animal morality would produce another set of rules. As would an analysis of rules for transactions between any entities. The FAI would then use a weighted sum of the three types of moral rules. The weights would change as society changed, i.e., when most of society consists of humans then human culture rules would be given the greatest weight. The FAI would plan for future changes in society by choosing rules that permit a smooth transition from a human centered society to an enhanced human plus AI society and then finally to an AI with human origins future.
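The weighted-sum idea in the paragraph above could be sketched roughly as follows; every rule, score, and weight here is a placeholder I invented for illustration, including the idea of tying the cultural weight to the human fraction of society:

# Rough sketch of scoring an action against three rule sets and combining
# the scores with weights that track the makeup of society.

def score(action, rules):
    """Average the per-rule scores of an action; each rule maps an action to [-1, 1]."""
    return sum(rule(action) for rule in rules) / len(rules)

def combined_score(action, universal_rules, animal_rules, cultural_rules,
                   human_fraction):
    # More weight on human cultural rules while society is mostly human.
    w_cultural = 0.6 * human_fraction
    w_animal = 0.2
    w_universal = 1.0 - w_cultural - w_animal
    return (w_universal * score(action, universal_rules)
            + w_animal * score(action, animal_rules)
            + w_cultural * score(action, cultural_rules))

# Toy usage with trivially simple placeholder "rules":
no_harm = lambda a: -1.0 if a == "kill stranger" else 0.5
reciprocity = lambda a: 1.0 if a == "cooperate" else 0.0
print(combined_score("cooperate", [no_harm], [reciprocity],
                     [no_harm, reciprocity], human_fraction=0.9))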

Humans might only understand the rules that applied to humans. The FAI would enforce a different subset of rules for non-human biological entities and another subset for AI's. Other rules would guide interactions between different types of entities. (My mental model is of a body made up of cells, each expressing proteins in a manner appropriate for the specific tissue while contributing to and benefitting from the complete animal system. Rules for each specific cell type and rules for cells interacting.)

The transition shouldn't feel too bad to the citizens at any stage and the FAI wouldn't be locked into an outdated morality. We might not recognize or like our children but at least we wouldn't feel our throats being cut.

Robin,

I don't know how people are capable of discerning moral truths. I also don't know how people are capable of discerning scientific or mathematical truths. It seems to me that these are similar capabilities, and the one is no more surprising or unlikely than the other.

Richard, while there are surely many details we would like to understand better, we do understand the basic outline of how we discern scientific and mathematical truths. For example, in math we use contradiction to eliminate possible implications of axiom sets, and in science we use empirical results to eliminate possible abstract theories. We have nothing remotely similar in morals. You never said whether you approved of a similar argument about knowledge of God.

Z. M. Davis writes: ... objective illness is just as problematic as objective morality

I would argue that to answer Robin's challenge is not necessarily to assert that there is such a thing as objective illness.

Accounts have been given of the pressure producing the ability to see beauty (google sexual selection or see e.g. this). This does not require that there is some eternal beauty written in the fabric of the universe - it may be, for example, that each species has evolved its own standard of beauty, and that selection is operating on both sides, i.e., selecting against individuals who are insufficiently beautiful and also selecting against admirers who differ too far from the norm.

However, this evolutionary concept of "illness" cannot be the ordinary meaning of the word, because no one actually cares about fitness.

My argument is: people can distinguish illness because it enhances their fitness to do so. Compare this to the following argument: people can distinguish the opposite sex because it enhances their fitness to do so. Now, okay, suppose that people don't care about fitness, as you say. Nevertheless, unbeknownst to them, telling women apart from men enhances their fitness. Similarly for illness.

Take homosexuality. It's often considered a mental disorder, but if someone is gay and happy being so, I would challenge (as evil, even) any attempt to define them as "ill" in anything more than the irrelevant evolutionary sense.

Homosexuality reduces fitness (so you seem to agree), but this does not make it an illness. Not everything that reduces fitness is an illness. Rather, illness tends to reduce fitness. Let me put it this way. Blindness tends to reduce fitness. But not everything that reduces fitness is blindness. Similarly, illness tends to reduce fitness. But that doesn't mean that everything that reduces fitness is illness.

... that which the patient desires in herself is health, and that which the patient does not desire in herself is sickness.

We can similarly say, that which a person desires in a mate is beauty. However, I think the most that can be said for this is that it is one concept of beauty. It is not the only concept. The idea that there is a shared standard of beauty is, despite much thought and argument to the contrary, still with us, and not illegitimate.


"what possible pressure could produce an ability to see morality?"

Unlike the other Richard, I don't think we "see" morality with a special "sense", or anything like that. But if we instead understand morality as a rational idealization, building on our perfectly ordinary general capacity for systematizing judgments so as to increase their overall coherence (treating like cases alike, etc.), then there's no great mystery here.

Dynamically Linked writes: But, it seems pretty obvious, at least to me, that game theory, evolutionary psychology, and memetics are not contingent on anything except mathematics and the environment that we happened to evolve in.

According to Tegmark "there is only mathematics; that is all that exists". Suppose he is right. Then moral truths, if there are any, are (along with all other truths) mathematical truths. Unless you presuppose that moral truths cannot be mathematical truths then you have not ruled out moral truths when you say that so-and-so is not contingent on anything except mathematics and such-and-such. For my part I fail to see why moral truths could not be mathematical truths.

Before I go on, do you actually believe this [Bayesian net diagram] to be the case?

I'm sorry to say that I can't read Bayesian net diagrams. Hopefully I answered your question anyway.

Robin:

Discarding false mathematical and scientific conjectures is indeed much easier than discarding false moral conjectures. However, as Eliezer pointed out in an earlier post, a scientist who can come up with a hypothesis that has a 10% chance of being true has already gone most of the way from ignorance to knowledge. I would argue that hypothesis generation is a poorly-understood nonrational process in all three cases. A mathematician who believes he has found truth can undertake the further steps of writing a formal proof and submitting his work to public review, greatly improving his reliability. A man confronted with a moral dilemma must make a decision and move on.
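The point about the 10% hypothesis can be put in rough information-theoretic terms. Assuming, purely for illustration, a hypothesis space of about a billion candidates (roughly 2**30), most of the bits of evidence are spent just locating a hypothesis that good:

import math

# Illustration with an assumed hypothesis space of ~2**30 candidates
# (the size is an arbitrary assumption for this sketch).

total_bits = 30                               # bits needed to single out one hypothesis
remaining_bits = math.log2(1 / 0.10)          # ~3.3 bits still needed once you're at 10%
gathered_bits = total_bits - remaining_bits   # ~26.7 bits already accounted for

print(f"{gathered_bits:.1f} of {total_bits} bits")  # most of the distance is already covered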

I think that the universal tendency towards religion is indeed evidence in favor of the existence of God, but not very strong evidence. The adaptive advantage of discerning correct metaphysics was minimal in the ancestral environment.

Richard C:

I think if you try to use your "general capacity for systematizing judgments" to make moral decisions, you'll restrict yourself to moral systems which are fully accessible to human reason.

If I were leading this discussion instead of Eliezer, a few days ago I would have pointed out that although a human and an AGI both have a system of terminal values, a.k.a. a goal system, it is easier to understand and to get right the goal system of an AGI than that of a human.

One reason for that is that an AGI will probably have the transparency property. A second reason values-for-humans is harder than values-for-AGI is that a well-designed AGI will be vastly less biased than the least-biased human beings. It seems to me that a large part of the reason human ethical codes are the way they are is the need to try to neutralize and counteract common human biases.

Then I would have told the audience that although comments about terminal values for humans are not completely off-topic, I am particularly interested in the problem of how a creator of an AGI picks the goal system of the AGI.

Then I would have explained monolithic AGI: it might be the case that the future light cone will be inhabited by a single AGI rather than a "society" of individuals. Although this monolithic AGI might decide to create a vehicle and send it to the moon, that lunar vehicle will with extremely high probability never compete with its creator (because the monolithic entity is extremely good at designing smart lunar vehicles and other intelligent artifacts that do not come to have their own agendas).

I would have done that just to spare myself from reading many comments to the effect that the purpose of morality is to enable individuals to live and work together effectively. That is a useful observation for explaining how human morality got the way it is, but IMHO not useful for the difficult problem of choosing a goal system for an AGI that might become smarter than us.

