
January 01, 2009

Comments

Eliezer, there are various ways to axiomatize expected utility theory, and even if someone who would never ever lie violates some axiom sets, there are slight variations which this does not violate, and it certainly seems to me pretty close to standard expected utility to assign "infinite" relative value to some factors. And in fact virtually no one who advocates not lying really thinks no other considerations could ever weigh in favor of a lie.

Pablo, I don't see why a choice's being made for prudence or well-being affects whether it satisfies the expected utility axioms.

jb: It's a model of how you should reason and act when you face fundamental moral uncertainty; it is not a recipe for Peace On Earth!

Phil: In your example, unless there is more structure and content than what you explicitly mention, it seems that m2 and m3 might well cooperate to vote for A, resulting in A being done all the time. Alternatively, they might strike a bargain (and m1 might also bargain with them), and the result could be some pattern of alternation between A, B, and C, ideally with A occurring more frequently. Which of these outcomes would ideally occur should depend on, e.g., how much each theory prefers one action over the others; on whether a theory thinks it most important that its preferred action be taken on at least some occasions, so that there are diminishing returns; etc.
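To make that contrast concrete, here is a minimal sketch of the two outcomes I have in mind - a single credence-weighted vote versus a crude proportional time-share bargain. The credences and preference intensities are invented purely for illustration; nothing is taken from Phil's example beyond the labels m1, m2, m3 and A, B, C.

```python
# Illustrative sketch only: three moral theories (m1, m2, m3) with credences,
# each scoring actions A, B, C on its own 0-1 scale (all numbers invented).

credences = {"m1": 0.4, "m2": 0.3, "m3": 0.3}

scores = {
    "m1": {"A": 0.0, "B": 1.0, "C": 0.2},
    "m2": {"A": 1.0, "B": 0.0, "C": 0.6},
    "m3": {"A": 1.0, "B": 0.1, "C": 0.0},
}

def credence_weighted_vote(credences, scores):
    """Single-shot vote: pick the action with the highest credence-weighted score."""
    actions = next(iter(scores.values())).keys()
    totals = {a: sum(credences[m] * scores[m][a] for m in credences) for a in actions}
    return max(totals, key=totals.get), totals

def time_share_bargain(credences, scores):
    """Crude bargain: each theory's favourite action is taken with a frequency
    equal to that theory's credence (shares for the same favourite add up)."""
    shares = {}
    for m, c in credences.items():
        favourite = max(scores[m], key=scores[m].get)
        shares[favourite] = shares.get(favourite, 0.0) + c
    return shares

winner, totals = credence_weighted_vote(credences, scores)
print(winner, totals)                          # m2 and m3 'cooperate': A wins every time
print(time_share_bargain(credences, scores))   # alternation: A 60% of the time, B 40%
```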

Unknown: Deontologies tend to have trouble handling probability well. I don't know exactly how deontologists should deal with this problem; but I am not prepared therefore to assign zero probability to such a broad class of moral theories with so many able proponents. I want a practically usable framework for thinking about fundamental moral uncertainty that is as accommodating as possible to the widest possible range of moral theories.

Yes, in principle you could achieve an effective ban on lying by assigning it a large negative finite utility and then sufficiently steeply discounting the utility of pie (and other goods). However, this doesn't work for those moral theories which say that pie does not have such diminishing marginal utility: you don't want to distort these theories by foisting upon them an assumption that they would reject, and one which has practical ramifications when considering gambles over pie, etc.
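As a toy illustration of why the discounting move works - and of how it distorts theories with non-diminishing utility of pie - compare a bounded and a linear pie-utility against the same large finite lie penalty (all numbers invented for the example):

```python
import math

LIE_PENALTY = -1_000.0   # large finite negative utility for a lie (invented number)

def u_pie_diminishing(pie):
    """Steeply diminishing marginal utility of pie: bounded above by 10,
    so no attainable amount of pie ever offsets the lie penalty."""
    return 10.0 * math.log1p(pie) / (1.0 + math.log1p(pie))

def u_pie_linear(pie):
    """Linear utility of pie, as a theory rejecting diminishing marginal
    utility might insist; enough pie eventually outweighs the penalty."""
    return 0.5 * pie

for pie in (10, 1_000, 10**9):
    worth_lying_diminishing = u_pie_diminishing(pie) + LIE_PENALTY > 0
    worth_lying_linear = u_pie_linear(pie) + LIE_PENALTY > 0
    print(pie, worth_lying_diminishing, worth_lying_linear)
# Lying is never worth it under the bounded utility, but becomes worth it
# under the linear one once pie exceeds 2000.
```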

It seems to me that this question, as interpreted here, is really how to handle uncertainty over decision theory. How should you weigh your uncertainty over, for example, expected vs. non-expected utility approaches? Is there a most general approach of which all the others are special cases?

nb: "It's a model of how you should reason and act when you face fundamental moral uncertainty; it is not a recipe for Peace On Earth!"

I realize that... the problem being that even an "internal" democracy is subject to the same constraints and limitations as any democracy. Furthermore, the allocation algorithm, the aforementioned "oracle," in some sense plays the role of the dictator in Arrow's paradox, albeit one that exists and exerts influence only at the beginning of the decision process.

Such a system is surely easier to reason about than, e.g., pure "internal" market mechanisms (which is what you suggest you end up with post-allocation in the model you propose anyway - i.e., you're merely constraining the initial allocation of resources across "participant" moral theories, presumably with respect to one issue). The idea that such constituencies could "trade" future influence/votes ignores the fact that the oracle must originally allocate the population according to the "probability" of each moral framework *with respect to the single, original issue considered.* If the population is reallocated for each issue, bargaining and trading influence across issues becomes quite difficult or impossible to reason about.

Net of all this, I'm not convinced this framework is any better or worse than any other for reasoning in the presence of (internally or otherwise) competing moral frameworks.

jb

As I think I already mentioned, if you use surreal numbers to represent utility, you don't need to do any discounting - since then you can use infinite (and infinitesimal) numbers - and they can represent the concept that no amount of A is worth B just fine. The need for surreal numbers in decision theory was established by Conway over three decades ago, in his study of the game of go.

For those asking about hyperreals, surreal numbers are a superset of those.
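There is no off-the-shelf surreal-number type to compute with, but the qualitative behaviour - no amount of A is worth B - can be sketched with a lexicographic stand-in, comparing an "infinite" component before the finite one. This is a crude illustration of the idea, not genuine surreal arithmetic:

```python
from functools import total_ordering

@total_ordering
class LexUtility:
    """Crude lexicographic stand-in for 'infinitely more important':
    compare the 'infinite' component first, the finite component only on ties.
    This is not surreal arithmetic, just the ordering behaviour it would give."""
    def __init__(self, infinite=0, finite=0.0):
        self.infinite, self.finite = infinite, finite
    def __add__(self, other):
        return LexUtility(self.infinite + other.infinite, self.finite + other.finite)
    def __eq__(self, other):
        return (self.infinite, self.finite) == (other.infinite, other.finite)
    def __lt__(self, other):
        return (self.infinite, self.finite) < (other.infinite, other.finite)

one_lie = LexUtility(infinite=-1)                       # one 'infinite' unit of disvalue
pie = lambda slices: LexUtility(finite=float(slices))   # pie is only finitely good

# No amount of pie compensates for a single lie:
print(one_lie + pie(10**12) < LexUtility())   # True
```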

"The need for surreal numbers in decision theory was established by Conway over three decades ago, in his study of the game of go."

You only get the surreal numbers out of the left-set/right-set construction if your set theory permits transfinite induction. Nothing in the study of finite games requires the surreal numbers.

Robin: There is a more general question of how to handle uncertainty over decision theory; I suspect that there we find ourselves on board Neurath's ship. But we can begin by addressing some more limited problem, like moral uncertainty. We may then be able to apply the same framework somewhat more generally; for example, to some uncertainty over metaethics, or some possible trade-offs between prudence and morality, etc. Yes, it seems you can always move up a level and consider a higher-level uncertainty, such as uncertainty over whether this proposed framework is correct - and we might also need to acknowledge the possibility that some disagreements at this higher level might themselves involve broadly moral issues. At this point we are not clear about whether our framework itself constitutes a kind of metaethical or meta-metaethical theory, or whether it falls entirely under the non-moral parts of rationality.

Tim: It's not so easy. Suppose you represent Kantianism as a consequentialist theory that assigns some negative infinite (perhaps surreal-valued) utility to any act of telling a lie. You then seem saddled with the implication that even if you assign Kantianism a mere 0.01% probability of being correct, it will still trump e.g. all finite utilitarian views even if you are virtually certain that some such view is correct. (Also, it is not clear that you would actually get the structure of Kantianism right in this way.)
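A toy calculation (with made-up credences and utilities) shows the trumping worry: once one theory in the mixture assigns negative infinite utility to an option, any non-zero credence in it dominates the finite contributions of all the other theories.

```python
# Toy version of the trumping worry (all numbers invented):
p_kant, p_util = 0.0001, 0.9999                 # 0.01% credence in 'infinite' Kantianism

u_lie = {"kant": float("-inf"), "util": 5.0}    # the lie would gain five utils of welfare
u_truth = {"kant": 0.0, "util": -3.0}           # telling the truth costs three

ev_lie = p_kant * u_lie["kant"] + p_util * u_lie["util"]        # -inf
ev_truth = p_kant * u_truth["kant"] + p_util * u_truth["util"]  # -2.9997

print(ev_lie < ev_truth)   # True: the 0.01% theory settles the question on its own
```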

Maybe I've missed an extensive discussion of the topic, but I, like David Jinkins, am confused by the concept of a probability distribution over moral frameworks, as the "prior" seems effectively immutable. What would Solomonoff induction applied to morality look like? Presumably still heavy on nihilism. Does the resulting prior+evolution resemble current human morality better than just nihilism+evolution? If not, why even use the term "probability"? Just shorthand for "element of a set of reals that sums to 1"?

David Jinkins hit the nail on the head.

(Bret) I think we should redefine "morality" to mean "practicality," and then assign a goal to this practicality; until we do, we're not talking about anything but contextless universals.

For example, morality is practicality with the goal of making us a stable, more intelligent species that colonizes at least 7 planets.
----
Thanks Brett. :s/morality/practicality/ makes the topic much more comprehensible. The colonization example is a good one. As Nick suggested, it is quite possible to leave the goal in question abstract once 'morality' is given the meaning described here.

I was confused by the initial post, since as I understand it 'Morality' is a quite different beast from what we've been talking about here - rather more to do with a cooperation mechanism we adapt to raise our social status and the frequency of the alleles we carry relative to the rest of our species. 'Moral overconfidence' made no sense.

Nick, in your critique, you combine utilities derived from different utility functions.

For me, that is simply an illegal operation: there is no general way of combining utilities from different utility functions, as though utility were some kind of probability. Each utility function may have its own scale and units - you can't necessarily take utilities from different utility functions and combine them.

As far as it not being clear to you how a utilitarian version of Kantianism would work: what exactly is the problem?

Utilitarianism is like a Turing machine of moral systems: if a morality is computable and finitely expressible, you can represent it by some function - one which describes the action to be taken - i.e., a utility function. If a morality is not computable, or requires some infinite system to express it, then in practice other moral agents can't make much use of it either. [repost, due to Typepad mangling]
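As a crude illustration of that picture (my own example, not Tim's exact proposal): a finitely expressible moral rule can be wrapped as a function from options to numbers, and "the action to be taken" is read off by maximization.

```python
# Crude illustration: a 'never lie, otherwise maximize welfare' rule wrapped
# as a utility function over options, with the choice made by maximization.

def rule_as_utility(option):
    action, welfare_gain = option
    if action == "lie":
        return -10.0**9        # effectively a prohibition (finite stand-in)
    return welfare_gain

def choose(options, utility):
    """The 'action to be taken' is whatever maximizes the utility function."""
    return max(options, key=utility)

options = [("lie", 100.0), ("tell_truth", 1.0)]
print(choose(options, rule_as_utility))   # ('tell_truth', 1.0)
```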

Build the model using fuzzy logic
