
July 04, 2008

Comments

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?
Because they believe the answer to "is it right that I want the pie?" isn't always "yes."

Is it OK to mash the two options together? I'd take the position that morality is about what people want, but that since it is about something real (wants), and thus objective and quantifiable, you can make statements about these real things that are actually true or false and not subject to whims.

To take a stab at a few of these...

Some terminal values can't be changed (or only very slightly); they are the ones we are born with: aversion to pain, desire for sex, etc. The more malleable ones that can be changed are never changed through logic or reasoning. They are changed through things like praise, rewards, condemnation, and punishments. I'm not sure if it's possible for people to change their own malleable terminal values. But people can change others' malleable terminal values (and likewise, have their own terminal values changed by others) through such methods. Obviously this is much easier to do very early in life.

I'd also like to propose that all terminal values can be viewed as instrumental values as well, based on their tendency to help fulfill, or prevent the realization of, other values. "Staying alive", for example.

Moral progress is made by empirical observation of what desires/aversions have the greatest tendency to fulfill other desires, and then by strengthening these by the social tools mentioned above.

You can very easily want to change your desires when several of your desires are in conflict. I have a desire to inhale nicotine, and a desire to not get lung cancer, and I realize these two are at odds. I'd much prefer to not have the first desire. If one of your wants has significant consequences (loss of friends, shunning by your family) then you often would really like that want to change.

"Doing something they shouldn't" or "wanting something they know is wrong" are demonstrations of the fact that all entities have many desires, and sometimes these desires are in conflict. A husband might want to have an extra-marital affair due to a desire for multiple sexual partners, and yet "know it's wrong" due to an aversion to hurting his wife, or losing his social status, or alienating his children, or various other reasons.

Am I not allowed to construct an alien mind that evaluates morality differently? What will stop me from doing so?

Can you elaborate on this? You seem to be using "allowed" in a strange way. If you have the means to do this, and others lack the means to physically restrain you from doing so, then the only thing that would stop you would be your own desires and aversions.

I don't think you're talking about my sort of view* when you say "morality-as-preference", but:

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?

A commitment to drive a hard bargain makes it more costly for other people to try to get you to agree to something else. Obviously an even division is a Schelling point as well (which makes a commitment to it more credible than a commitment to an arbitrary division).
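A rough sketch of that bargaining logic, with made-up payoffs and a deliberately simple accept-or-push-back model (my own toy numbers, not the commenter's):

def best_response(demanded_share, credibility, fallback_share=0.6):
    # The other party either accepts my committed demand, or pushes back.
    # Pushing back pays off only if my commitment turns out to be a bluff
    # (probability 1 - credibility), in which case I cave and they get the
    # fallback share; if the commitment is real, the deal collapses and
    # they get nothing.
    accept = 1.0 - demanded_share
    push_back = (1.0 - credibility) * fallback_share
    return "accept" if accept >= push_back else "push back"

print(best_response(0.5, credibility=0.9))  # focal 50/50 demand: accept
print(best_response(0.7, credibility=0.4))  # arbitrary 70/30 demand: push back

The even split, being a focal point, is easier to commit to credibly, and that credibility is exactly what makes simply accepting it the other side's best response.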

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

I think humans tend not to have very clean divisions between instrumental and terminal values. Although there is no absolute moral progress or error, some moralities may be better or worse than others by almost any moral standard a human would be likely to use. Through moral hypocrisy, humans can signal loyalty to group values while disobeying them. Since humans don't self modify easily, a genuine desire to want to change may be a cost-effective way to improve the effectiveness of this strategy.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?

See above on signaling and hypocrisy.

*moral nihilist with instrumental view of morality as tool for coordinating behaviour.

It seems to me that the problems of morality-as-preference come from treating humans as monolithic. Real humans are complex internal ecosystems of agents, embedded in larger social systems of agents. As such, they would be expected to shift dominant values, maybe even to shift terminal values, as when an agent builds a new agent to accomplish its goals by optimizing for some proxy to those goals and eventually finds the new agent pursuing that proxy against its explicit preferences. Plato viewed morality as well-ordered relationships between agents, that is, presumably some sort of attractor in the space of possible such relationships which leads to most of the reasonably high-level agents flourishing in the medium term and the very high-level ones flourishing in the long term.

Consistent with the above, morality as a given can simply be part of the universe or multiverse, but this is hard to express. It is a given that certain configurations *are* perceptions of moral "wrongness" and others *are* perceptions of moral "rightness".

I think the answer (to why this behavior adds up to normality) is in the spectrum of semantics of knowledge that people operate with. Some knowledge is primarily perception, and reflects what is clearly possible or what clearly already is. Another kind of "knowledge" is about goals: it reflects what states of the environment are desirable, and not necessarily which states are in fact possible. These concepts drive behavior, each pushing in its own direction: perception shows what is possible, goals show where to steer the boat. But if these concepts have similar implementations and many intermediate grades, it would explain the resulting confusion: some of the concepts (subgoals) start to indicate things that are somewhat desirable and maybe possible, and so on.

In the case of moral argument, what a person wants corresponds to pure goals and has little feasibility in it ("I want to get the whole pie"). "What is morally right" adds a measure of feasibility, since such a question is posed in the context of many people participating at the same time; since everyone getting the whole pie is not feasible, it is not an answer in that case. Each person is a goal-directed agent, operating towards certain a priori infeasible goals, plotting feasible plans towards them. In the context of society, these plans are developed so as to satisfy the real-world constraints that it imposes.

Thus, "morally right" behavior is not the content of goal-for-society, it is an adapted action plan of individual agents towards their own infeasible-here goals. How to formulate the goal-for-society, I don't know, but it seems to have little to do with what presently forms as morally right behavior. It would need to be derived from goals of individual agents somehow.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?

It's all about delicious versus nutritious. That is, these conflicts are conflicts between different time horizons, or different discount values for future costs and benefits. Evolution has shaped our time horizon for making relatively short term decisions (Eat the pie now. It will taste good. There may not be another chance.), but we live in a world where a longer term is more appropriate (The pie may taste good, but it isn't good for my health. Also, I may benefit in the long term by giving the pie to somebody else.).
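A minimal numerical sketch of that trade-off; the utilities and discount factors below are invented for illustration:

def net_value_of_eating_pie(pleasure_now, health_cost_later, discount):
    # The future cost is weighted by a discount factor in (0, 1);
    # a small factor means a short time horizon.
    return pleasure_now - discount * health_cost_later

print(net_value_of_eating_pie(10, 15, discount=0.5))  #  2.5 > 0: the short horizon says eat it
print(net_value_of_eating_pie(10, 15, discount=0.9))  # -3.5 < 0: the long horizon says pass

The same pair of consequences flips from "do it" to "don't" purely as a function of how heavily the future is discounted.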

All good questions.

Try replacing every instance of 'morality' with 'logic' (or 'epistemic normativity' more broadly). Sure, you could create a mind (of sorts) that evaluated these things differently -- that thought hypocrisy was a virtue, and that contradictions warranted belief -- but that's just to say that you can create an irrational mind.

I share neither of those intuitions. Why not stick with the obvious option of morality as the set of evolved (and evolving) norms? This *is* it; looking for the "ideal" morality would be passing the recursive buck.

This does not compel me to abandon the notion of moral progress though; one of our deepest moral intuitions is that our morality should be (internally) consistent, and moral progress, in my view, consists of better reasoning to make our morality more and more consistent.

"moral progress, in my view, consists of better reasoning to make our morality more and more consistent"

Right, so morality is not our [actual, presently existing] "set of evolved norms" at all, but rather the [hypothetical, idealized] end-point of this process of rational refinement.

The questions posed by Eliezer are good but elementary. Since there is an entire class of people--moral philosophers--who have been professionally debating and (arguably) making progress on these issues for centuries, why do we believe that we can make much progress in this forum?

I claim that it is highly unlikely that anyone here has an exceptional insight (because of Bayesianism or whatever) that could cause a rational person to assign appreciable importance to this discussion for the purposes of forming moral beliefs. In other words, if we want to improve our moral beliefs, shouldn't we all just grab a textbook on introductory moral philosophy?

Or is this discussion merely an exercise?

There's at least one other intuition about the nature of morality to distinguish from the as-preference and as-given ideas. It's the view that there are only moral emotions - guilt, anger and so on - plus the situations that cause those emotions in different people. That's it. Morality on this view might profitably be compared with something like humour. Certain things cause amusement in certain people, and it's an objective fact that they do. At the same time, if two people fail to find the same thing funny, there wouldn't normally be any question of one of them failing to perceive some public feature of the world. And like the moral emotions, amusement is sui generis - it isn't reducible to preference, though it may often coincide with it. The idea of being either a realist or a reductionist about humour seems, I think, absurd. Why shouldn't the same go for morality?

@Richard
I agree with you, of course. I meant there exists no objective, built-into-the-fabric-of-the-universe morality which we can compute using an idealised philosopher program (without programming in our own intuitions that is).

Jess - "shouldn't we all just grab a textbook on introductory moral philosophy?"

That would seem ideal. I'd recommend James Rachels' The Elements of Moral Philosophy for a very engaging and easy-to-read introductory text. Though I take it Eliezer is here more interested in meta-ethics than first-order moral inquiry. As always, the Stanford Encyclopedia of Philosophy is a good place to start (then follow up Gibbard and Railton, especially, in the bibliography).

On the other hand, one shouldn't let the perfect be the enemy of good discussion. Better to reinvent the wheel than to go without entirely!

My response to these questions is simply this: once the neurobiology, sociology, and economics are in, these questions will either turn out to have answers or to be the wrong questions (the latter possibility being the much more probable outcome). The only one I know how to answer is the following:

Do the concepts of "moral error" and "moral progress" have referents?

The answer being: Probably not. Reality doesn't much care for our ways of speaking.

A longer (more speculative) answer: The situation changes and we come up with a moral story to explain that change in heroic terms. I think there's evidence that most "moral" differences between countries, for example, are actually economic differences. When a society reaches a certain level of economic development the extended family becomes less important, controlling women becomes less important, religion becomes less important, and there is movement towards what we consider "liberal values." Some parts of society, depending on their internal dynamics and power structure, react negatively to liberalization and adopt reactionary values. Governments tend to be exploitative when a society is underdeveloped, because the people don't have much else to offer, but become less exploitative in productive societies because maintaining growth has greater benefits. Changes to lesser moral attitudes, such as notions of what is polite or fair, are usually driven by the dynamics of interacting societies (most countries are currently pushed to adopt Western attitudes) or certain attitudes becoming redundant as society changes for other reasons.

I don't give much weight to people's explanations as to why these changes happen ("moral progress"). Moral explanations are mostly confabulation. So the story that we have of moral progress, I maintain, is not true. You can try to find something else and call it "moral progress." I might argue that people are happier in South Korea than North Korea, and that's probably true. But to make it a general rule would be difficult: baseline happiness changes. Most Saudi Arabian women would probably feel uncomfortable if they were forced to go out "uncovered." I don't think moral stories can be easily redeemed in terms of harm or happiness. At a more basic level, happiness just isn't the sort of thing most moral philosophers take it to be: it's not something I can accumulate, and it doesn't respond in the ways we want it to. It's transient and it doesn't track supposed moral harm very well (the average middle-class Chinese is probably more traumatized when their car won't start than by the political oppression they supposedly suffer). Other approaches to redeeming the kinds of moral stories we tell are similarly flawed.

I think I'm echoing Eneasz when I ask: how does Preference Utilitarianism fit into this scheme? In some sense, it's taken as given that the aim is to satisfy people's preferences, whatever those are. So which type of morality is it?

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?

These really are different statements. "I am entitled to fraction x of the pie" means more or less the same as "a fair judge would assign me fraction x of the pie".

But a fair judge just means a judge who has no personal relationship with any of the disputing parties and makes his decision based on some rational process, not arbitrarily. It isn't necessarily true that there's a unique solution a fair judge would decide upon. One could say that whoever saw the pie first or touched it first is entitled to the whole thing, or that it should be divided strictly equally, or that it should be divided on a need basis or a merit basis, or the judge could even adopt the Gods-Must-Be-Crazy / idiocy-of-Solomon solution and say it's better that the pie be destroyed than allowed to exist as a source of dissent. In my (admittedly spotty) knowledge of anthropology, in most traditional pie-gathering societies, if three members of a tribe found a particularly large and choice pie, they would be expected to share it with the rest of the tribe, but they would have a great deal of discretion as to how the pie was divided, and they'd keep most of it for themselves and their allies.

This is not to say that morality is nothing but arbitrary social convention. Some sets of rules will lead to outcomes that nearly everyone would agree are better than others. But there's no particular reason to believe that there could be rules that everyone will agree on, particularly not if they have to agree on those rules after the fact.

Poke - "most 'moral' differences between countries, for example, are actually economic differences"

I'd state that slightly differently: not that moral differences just are economic differences (they could conceivably come apart, after all), but rather, moral progress is typically caused by economic progress (or, even more likely, they are mutually reinforcing). In other words: you can believe in the possibility of moral progress, i.e. of changes that are morally better rather than worse, without buying into any particular explanatory story about why this came to be.

(Compare: "Most 'height' differences between generations... are actually nutritional differences." The fact that we now eat better doesn't undo the fact that we are now taller than our grandparents' generation. It explains it.)

Richard: I would say that moral 'progress' is caused by economics as well, but in a complex manner. Historically, in Western Civilization, possibly due to the verbalized moral norm "do unto others as you would have others do unto you", plus certain less articulate ideas of justice as freedom of conscience, truth, and vaguely 'equality', there is a positive feedback cycle between moral and economic 'progress'. We could call this "true moral progress".

However, the basic drive comes from increased wealth driving increased consumption of the luxury "non-hypocrisy", which surprisingly turns out to be an unrecognized factor of production. Economic development can cause societies with other verbalized governing norms to travel deeper into the "moral abyss", e.g. move away from the attractor that Western Civilization moves towards instead. Usually, this movement produces negative feedback, as it chokes off economic progress, which happens to benefit from movement towards Western moral norms within a large region of possibility space stretching out from the evolutionary psychology emergent default.

In rare cases, however, it may be possible for positive feedback to drive a culture parasitically down into the depths of the "moral abyss". This could happen if a culture discovers a road to riches in the form of decreased production, which is possible if that culture is embedded in an international trade network and highly specialized in the production of a good with inelastic supply. In this case, the productivity losses that flow from moral reform can serve as a form of collusion to reduce production, driving up the price.

Vassar, you're conflating 'morality' both with non-hypocrisy AND with some vaguely-alluded-to social interaction preferences.

Having enough wealth to be able to afford to abolish class divisions only permits non-hypocrisy if you've been proclaiming that class division should be abolished. You seem to be confusing certain societal political premises with 'morality', then calling the implementation of those premises 'moral progress'.

I fall closer to the morality-as-preference camp, although I'd add two major caveats.

One is that some of these preferences are deeply programmed into the human brain (i.e. "Punish the cheater" can be found in other primates too), as instincts which give us a qualitatively different emotional response than the instincts for direct satisfaction of our desires. The fact that these instincts feel different from (say) hunger or sexual desire goes a long way towards answering your first question for me. A moral impulse feels more like a perception of an external reality than a statement of a personal preference, so we treat it differently in argument.

The second caveat is that because these feel like perceptions, humans of all times and places have put much effort into trying to reconcile these moral impulses into a coherent perception of an objective moral order, denying some impulses where they conflict and manufacturing moral feeling in cases where we "should" feel it for consistency's sake. The brain is plastic enough that we can in fact do this to a surprising extent. Now, some reconciliations clearly work better than others from an interior standpoint (i.e. they cause less anguish and cognitive dissonance in the moral agent). This partially answers the second question about moral progress— the act of moving from one attempted framework to one that feels more coherent with one's stronger moral impulses and with one's reasoning.

And for the last question, the moral impulses are strong instincts, but sometimes others are stronger; and then we feel the conflict as "doing what we shouldn't".

That's where I stand for now. I'm interested to see your interpretation.

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?

"I want the pie" is something that nobody else is affected by and thus nobody else has an interest in. "I should get the pie" is something that anybody else interested in the pie has an interest in. In this sense, the moral preferences are those that other moral beings have a stake in, those that affect other moral beings. I think some kind of a distinction like this explains the different ways we talk about and argue these two kinds of preferences. Additionally, evolution has most likely given us a pre-configured and optimized module for dealing with classes of problems involving other beings that were especially important in the environment of evolutionary adaptedness, which subjectively "feels" like an objective morality that is written into the fabric of the universe.

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

I think of preferences and values as being part of something like a complex system (in the sense of http://en.wikipedia.org/wiki/Complex_system) in which all the various preferences are inter-related and in constant interaction. There may be something like a messy, tangled hierarchy where we have terminal preferences that are initially hardwired at a very low-level, on top of which are higher-level non-terminal preferences, with something akin to back-propagation allowing for non-terminal preferences to affect the low-level terminal preferences. Some preferences are so general that they are in constant interaction with a very large subset of all the preferences; these are experienced as things that are "core to our being", and we are much more likely to call these "values" rather than "preferences", although preferences and values are not different in kind.
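A minimal sketch of that tangled hierarchy; the particular values, weights, and feedback rule below are illustrative assumptions on my part, not claims from the comment:

# Low-level terminal values, a higher-level preference derived from them,
# and a weak feedback step ("akin to back-propagation") by which the
# derived preference nudges the terminal values it leans on.
terminal = {"avoid_pain": 1.0, "social_harmony": 0.6}

def derived_preference(values):
    # A higher-level preference (say, "keep promises") supported by both terminal values.
    return 0.5 * values["avoid_pain"] + 0.5 * values["social_harmony"]

def feedback(values, rate=0.05):
    # Repeatedly acting on the derived preference slightly strengthens
    # one of the terminal values that supports it.
    values["social_harmony"] += rate * derived_preference(values)
    return values

for _ in range(3):
    terminal = feedback(terminal)
print(terminal)  # social_harmony has drifted upward across iterations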

I think of moral error as actions that go against the terminal values (and the closely associated non-terminal values that feed back into them) and the most general values (those involving other moral beings) of a large class of human beings, either directly, via this particular instance of the error affecting me, or indirectly, via contemplation of this type of moral error becoming widespread and affecting me in the future. I think of moral progress as changes to core values that result in more human beings having their fundamental values (like fairness, purpose, social harmony) flourish more frequently and more completely rather than be thwarted.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"?

Because the system of interdependent values is not a static system and it is not a consistent system either. We have some fundamental values that are in conflict with each other at certain times and in certain circumstances, like self-interest and social harmony. Depending on all the other values and their interdependencies, sometimes one will win out, and sometimes the other will win out. Guilt is a function of recognizing that something we have done has thwarted one of our own fundamental values (but satisfied the others that won out in this instance) and thwarted some fundamental values of other beings too (not thwarting the fundamental values of others is another of our fundamental values). The messiness of the system (and the fact that it is not consistent) dooms any attempt by philosophers to come up with a moral system that is logical and always "says what we want it to say".

Does the notion of morality-as-preference really add up to moral normality?

I think it does add up to moral normality in the sense that our actions and interactions will generally be in accordance with what we think of as moral normality, even if the (ultimate) justifications and the bedrock that underlies the system as a whole are wildly different. Fundamental to what I think of as "moral normality" is the idea that something other than human beings supplies the moral criterion, whereas under the morality-as-preference view as I described it above, all we can say is that *IF* you desire to have your most fundamental values flourish (and you are a statistically average human in terms of your fundamental values including things like social harmony), *THEN* a system that provides for the simultaneous flourishing of other beings' fundamental values is the most effective way of accomplishing that. It is a fact that most people *DO* have these similar fundamental values, but there is no objective criterion from the side of reality itself that says all beings MUST have the desire to have their most fundamental values flourish (or that the fundamental values we do have are the "officially sanctioned" ones). It's just an empirical fact of the way that human beings are (and probably many other classes of beings that were subject to similar pressures).

These are difficult questions, but I think I can tackle some of them:

Why would anyone want to change what they want?

If a person wants to change from valuing A to valuing B, they are really saying that they already value B, but pursuing it requires short-term sacrifices, and in the short term valuing A may feel psychologically easier, even though it sacrifices B. They thus want to value B more strongly so that it is psychologically easier to make the tradeoff.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"?

That's recognizing that they are violating an implicit understanding with others that they would not want others to do to them, and would perhaps hope others don't find out about. They are also feeling a measure of psychological pain from doing so as a result of their empathy with others.

People quite often hold contradictory positions simultaneously. There is little incentive for people to be entirely consistent; in fact, consistency is prohibitively expensive in a psychological sense (e.g., voters). It would be even easier for inconsistencies to occur between what you choose to do and what you think you should do.

Am I not allowed to construct an alien mind that evaluates morality differently? What will stop me from doing so?

No. Morality (and the rules that promote intra-group strength) is an almost mathematical consequence of evolutionary game theory applied to the real world of social animals in the context of Darwinian evolution.

The outcome of applying EGT to nature is what is called "multilevel selection theory" (Sloan Wilson, E. O. Wilson). This theory has a "one foot" description: within groups, selfish individuals prevail over selfless ones; between groups, groups of selfless individuals prevail over groups of selfish individuals.

This is the essence of our evolved moral judgements, moral rules, and of all our internal and external moral conflicts.

http://ilevolucionista.blogspot.com/2008/05/entrevista-david-sloan-wilson.html
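A toy simulation of that "one foot" description (my own construction with invented payoff numbers, not the commenter's or the Wilsons' model): within every group the selfish out-reproduce the selfless, yet the groups with more selfless members grow faster, so the selfless type can still spread in the population as a whole.

BENEFIT, COST = 2.0, 0.2   # group benefit created per altruist, and the altruist's personal cost

def next_generation(groups):
    # groups: list of (selfless_count, selfish_count) pairs; counts kept as floats
    out = []
    for a, s in groups:
        bonus = BENEFIT * a / (a + s)          # shared by everyone in the group
        out.append((a * (1 + bonus - COST),    # the selfless pay the cost...
                    s * (1 + bonus)))          # ...free-riders do not
    return out

groups = [(9.0, 1.0), (1.0, 9.0)]              # one mostly-selfless group, one mostly-selfish
for gen in range(4):
    selfless = sum(a for a, _ in groups)
    total = sum(a + s for a, s in groups)
    print(f"gen {gen}: global selfless share = {selfless / total:.2f}")
    groups = next_generation(groups)

Within each group the selfless fraction falls every generation, but the global share printed above rises; the between-group effect is doing the work.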
