
January 01, 2009

Comments

Are you actually trying to resolve moral uncertainty, as in move towards moral certainty? If so, I don't think this works at all, since no Total Utilitarian is going to see the results of this and decide he was wrong about whatever he didn't get his way on. At least presuming the Parliament membership is made up of other humans.

He may be willing to live with the practical implications of the results, but if that is your goal then you are simply arguing for representative government.

Being either purely probabilistic or purely pluralistic seems mathematically sensible, but won't you get some self-contradictory results by pretending to be probabilistic, but not being so?

Wouldn't it be mathematically cleaner to give every theory an amount of moral money proportional to its probability and let them trade or auction on outcomes?

> if you don't know which moral theory is correct?

You lost me here. How do you define "correct" moral theory? Is there something more to it than internal consistency?

>For example, suppose you give X% probability to total utilitarianism and (1-X)% to average utilitarianism. Now an action might add 5 utils to total happiness and decrease average happiness by 2 utils. (This could happen, e.g. if you create a new happy person that is less happy than the people who already existed.) Now what do you do, for different values of X?

If you are dealing with two moral theories that can represent the current state of affairs as a number, as in this case, then you could multiply the expected percentage change in that number by the probability that the moral theory is correct.
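A rough sketch of how that calculation might go, using the numbers from the quoted example; the baseline figures and the credence X here are made up purely for illustration:

```python
# Sketch of the percentage-change proposal above. The baselines and the
# credence X are illustrative assumptions, not figures from the post.
X = 0.6                    # credence in total utilitarianism
baseline_total = 1000.0    # hypothetical current total happiness (utils)
baseline_average = 10.0    # hypothetical current average happiness (utils/person)

delta_total = 5.0          # the action from the quoted example: +5 total utils
delta_average = -2.0       # ... and -2 utils of average happiness

# Weight each theory's *percentage* change by the credence given to it.
score = (X * (delta_total / baseline_total)
         + (1 - X) * (delta_average / baseline_average))

print(score)  # > 0 suggests acting, < 0 refraining, on this proposal
```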

This is a great idea for dealing with moral uncertainty on a personal level. Rather than a parliament, though, it sounds to me more like a market in which the various theories have varying amounts of capital and thus varying degrees of purchasing power with respect to future world-states. Something like this is probably what most of us do already without realizing it, which is perhaps why this seems so intuitively compelling to me.

What underlying criteria are you using to determine what "works" and what does not? Do you mean "works mathematically" in some decision-theoretic sense, or do you have broader criteria or something else altogether in mind?

It's appealing (though hard to see why a Parliament is a better device than a market, see Tomasz), but I wouldn't characterize it as a solution. The Parliament (or whatever) is a black box that we have no principled way of simulating.

Determining the results of the parliamentary procedure is not (to me) any easier than determining de novo the best compromise between the competing theories.

Nick--Should we expect prisoners' dilemmas to arise in the negotiating phase? I think generally yes. So presumably, there can be Pareto optima outside of the schema.....not intended as an argument against using the method, just worth thinking about...maybe one should stipulate that the delegates one-box on Newcomb-like problems, along the lines Eli has discussed on OB. Though, because we'd need to address the moralities which advocate two-boxing on Newcomb-like problems, it seems like a meta-version of the original problem is embedded in the Parliament solution: "because many moral theories state that you should not always maximize expected utility." This may hash out simpler than I'm making it, but I'm curious about your thoughts...

I've chatted with Toby Ord about what it would take to use this model in Coherent Extrapolated Volition, for which it might well have been designed.

To me it looks like the main issues are in configuring the "delegates" so that they don't "negotiate" quite like real agents - for example, there's no delegate that will threaten to adopt an extremely negative policy in order to gain negotiating leverage over other delegates.

The part where we talk about these negotiations seems to me like the main pressure point on the moral theory qua moral theory - can we point to a form of negotiation that is isomorphic to the "right answer", rather than just being an awkward tool to get closer to the right answer?

But it's clear enough that we need some better method of integration than throwing away all but one answer, picking randomly, summing unscaled utility functions, etc.

@Anonymous:

> You lost me here. How do you define "correct" moral theory? Is there something more to it than internal consistency?

Reverse total utilitarianism is just as internally consistent as total utilitarianism, so why doesn't anyone do it? Probably because it's against their moral intuitions. The % chance of a theory being "correct" is a way of formalizing your intuitions.

Tomasz: It might amount to the same thing, depending on how the details are fleshed out. I'm not sure which metaphor is better.

Anonymous: There are many competing accounts of this - it is a large part of what metaethics is all about. It might be best to start by assuming moral realism; and then to consider whether and how the framework might be able to incorporate uncertainty ranging over other metaethical theories as well.

John: Even when two moral theories both represent the value of a world as a number, it doesn't mean that we can feed these numbers directly into an expected utility calculation - for this would only work if the units that those numbers measure are the same. This does not seem to be the case in the example at hand. (Compare happiness (apples) vs happiness/person (oranges): what is the conversion factor? Is the conversion factor constant or does it vary with population size? Who (which theory) decides this?) - Another problematic issue is that if we take the "value-number" that a theory outputs at face value, this would seem to give an unfair advantage to those theories that postulate inflated numbers - bringing up issues related to Pascal's Mugging (http://www.nickbostrom.com/papers/pascal.pdf), which the present framework is also designed to mitigate.

Anonym: In practical terms, I have in mind some sort of reflective equilibrium model as the methodology for determining what "works".

Dave and explicator: Yes there are a number of known issues with various voting systems, and this is the reason I say our model is imprecise and under-determined. But we have some quite substantial intuitions and insights into how actual parliaments work so it is not a complete black box. For example, we can see that, other things equal, views that have more delegates tend to exert greater influence on the outcome, etc. There are some features of actual parliaments that we want to postulate away. The fake randomization step is one postulate. We also think we want to stipulate that the imaginary parliamentarians should not engage in blackmail etc. but we don't have a full specification of this. Also, we have not defined the rule by which the agenda is set. So it is far from a complete formal model.

The system is designed to minimize extremism, but the starting assumption is that you don't know what moral theory is correct, so how do you know extremism is bad and should be minimized?

p.s. should it be (100-X)% ?

How can one give a probability to a moral theory being correct? Let's assume that there exists a moral theory out there which can be proved airtight using only very basic facts and logic (the Parfit-Sidgwick stuff). Let's assume again that this theory is uniquely true. Since we don't have this theory today, how can we assign any sort of probability to any theory being true (except for the ones that can be proved inconsistent--we can give them a very low chance of being true)? It seems like any attempt at this will just employ common sense and intuition as a ruler of truth probability, but I can't see why this kind of measurement isn't completely arbitrary and unrelated to any "real" probability distribution.

This parliamentary model feels to me like it ought to be kinda equivalent to something simpler and generalizable: you basically have a bunch of parties (= ethical theories) all trying to optimize their overall satisfaction, with constraints on how much total influence each can exert. There are going to be prisoners' dilemmas and things there, as explicator points out, but it seems like a more global scheme like "maximize total/minimum/weighted-total satisfaction" would do about as well and avoid them. (With a real parliament, if you knew exactly what every delegate cared about and they all knew you knew and trusted you and so forth, you could come up with an overall set of outcomes that they all found at least as good as what they could achieve by negotiation, right?) So then you'll end up with some sort of hairy optimization problem, which might or might not have a nice closed-form solution but in any case won't be any harder to solve than the problem of working out what your delegates would do.

(Is there some reason to think that the parliamentary model is inequivalent to, and better than, this sort of constrained optimization model?)
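For what it's worth, here is a minimal sketch of the constrained-optimization reading: every theory reports a satisfaction score for each candidate outcome and we pick the outcome with the highest credence-weighted total. All the names and numbers are hypothetical, and how the satisfactions get put on a common scale is exactly one of the arbitrary-seeming choices discussed below.

```python
# Minimal sketch of "maximize credence-weighted satisfaction". How each
# theory's satisfaction is put on a common [0, 1] scale is left open here,
# which is one of the arbitrary-seeming choices the comment points to.
credences = {"total_util": 0.5, "average_util": 0.3, "deontology": 0.2}

satisfaction = {  # hypothetical scores per outcome, per theory
    "outcome_1": {"total_util": 0.9, "average_util": 0.2, "deontology": 0.5},
    "outcome_2": {"total_util": 0.4, "average_util": 0.8, "deontology": 0.6},
}

def weighted_satisfaction(outcome):
    return sum(credences[t] * satisfaction[outcome][t] for t in credences)

best = max(satisfaction, key=weighted_satisfaction)
print(best, round(weighted_satisfaction(best), 3))
```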

But when you think about it this way -- or, at least, when *I* think about it this way -- it seems clear that there are a *lot* of arbitrary-seeming choices either implicit in the parliamentary model or left open by it. Just how does the amount of political capital a given party is willing to spend on an issue relate to its importance as measured by the party's theory? Exactly what combination of satisfactions are we maximizing? Doubtless a particular answer or non-answer to that drops out of the parliamentary model once you give a complete description of the delegates' psychology, but why should that be a good answer?

Making those choices feels to me like it's about as hard as solving the original problem...

Eliezer: "can we point to a form of negotiation that is isomorphic to the 'right answer', rather than just being an awkward tool to get closer to the right answer?"

I don't know. One possibility is that the concept of 'right answer' in this context is also a bit under-determined, so that the vague model actually matches our vague concept, and that several different possible precise models might each be equally defensible arbitrary explications of this vague concept. But more likely, it would be possible to make at least a bit more progress before reaching such limits.

Btw, there are two related yet distinct tasks here, both important. One is to develop a model of how to deal with moral uncertainty that we can actually use now. The other is to develop a model that could be used by a seed AI.

It seems to me that the examples that make you shy away from simply using expected utility are pretty close to standard Pascal's wager examples. So just tackle those sorts of problems head on, and use those solutions here. If you are, like me, committed to standard decision theory in those cases, you can continue to use it for moral uncertainty as well.

Why can't you just maximize a weighted sum of everyone's utility (including of those not yet born) across states where each state represents a world in which a given moral view is correct? This gives you a better outcome than your Parliament scheme.

For example, imagine you have moral beliefs A, B, and C and are 49% sure that A is right, 49% sure that B is right and 2% sure that C is correct. You have to make two decisions, the first is RED or BLUE, the second is HOT or COLD. A gets $10 more from RED than BLUE, whereas B gets $10 more from BLUE than RED, and C gets the same from either BLUE or RED. A and B both get $9 more from HOT than COLD whereas C gets $1 more from COLD than HOT. Under a weighted sum system the second decision would be HOT whereas under your Parliament scheme (with no money transfers) that second decision could be the obviously inferior COLD. (It's indeterminate what would happen since there will always be two moral beliefs that would vote to move away from any outcome.)
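Working the example through as a credence-weighted sum (this is a sketch of the weighted-sum rule being proposed here, not of the parliamentary model; the dollar payoffs are as stated above):

```python
# James's example as a credence-weighted sum of dollar payoffs.
credence = {"A": 0.49, "B": 0.49, "C": 0.02}

payoff_color = {"RED":  {"A": 10, "B": 0,  "C": 0},
                "BLUE": {"A": 0,  "B": 10, "C": 0}}
payoff_temp  = {"HOT":  {"A": 9,  "B": 9,  "C": 0},
                "COLD": {"A": 0,  "B": 0,  "C": 1}}

def expected(payoffs, option):
    return sum(credence[t] * payoffs[option][t] for t in credence)

for table in (payoff_color, payoff_temp):
    scores = {o: round(expected(table, o), 2) for o in table}
    print(scores)
# RED and BLUE tie at 4.90; HOT (8.82) clearly beats COLD (0.02),
# which is the point of the example.
```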


You can always get morally inferior outcomes when you reject the utility maximization criterion, because utility is essentially defined as that which a rational person maximizes.


James, utility functions are invariant under positive affine transformations (translation and multiplication by a positive constant). So A' can make all the same decisions as A, but we write down its utility function with a different choice of arbitrary parameters so that A' gets $1000 more from RED than BLUE, causing its preferences to dominate. It's not like there's a little "true utility function" with an absolute zero and scale that hangs off all agents.
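A small illustration of the rescaling problem, with made-up numbers: a positive affine transformation leaves A's own choices untouched, but lets the rescaled A' dominate a naive sum across agents.

```python
# Positive affine rescaling (a*u + b with a > 0) preserves an agent's own
# choices but skews a naive cross-agent sum. Numbers are illustrative only.
options = ["RED", "BLUE"]

u_A = {"RED": 10, "BLUE": 0}   # agent A
u_B = {"RED": 0,  "BLUE": 10}  # agent B

def rescale(u, a, b):
    return {k: a * v + b for k, v in u.items()}

u_A_prime = rescale(u_A, 100, 7)  # A' makes exactly the same choices as A

assert max(options, key=u_A.get) == max(options, key=u_A_prime.get)

naive_sum  = {o: u_A[o] + u_B[o] for o in options}        # a tie
skewed_sum = {o: u_A_prime[o] + u_B[o] for o in options}  # A' now dominates
print(naive_sum, skewed_sum)
```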

Sorry, I withdraw my objections because I didn't take into account your stochastic variable concept.

John Maxwell: The % chance of a theory being "correct" is a way of formalizing your intuitions.

I think the probability analogy is misleading: a 50% chance of finding a red/blue ball in the box ultimately means that there is either a completely red or a completely blue ball inside, while a 50% "chance" of, say, egoism/total utilitarianism means one's preferences are of "mixed color".

So, what about Virus Moral Theory, that says the only important thing is to take actions that increase perceived importance of itself (various strains of it could propose different actions in neutral situations). Might not work on humans, our hardware isn't that elastic.

Nick Bostrom: It might be best to start by assuming moral realism; and then to consider whether and how the framework might be able to incorporate uncertainty ranging over other metaethical theories as well.

I'll try, although the very first step is already counterintuitive for me.

@Nick Bostrom:

I was speaking of the expected percentage change. The implication is that the conversion factor would be (current number of apples/current number of oranges). To improve accuracy, you could change the conversion factor to (average number of apples over the past century/average number of oranges over the past century).

If total utility is constantly rising while average utility stays more or less constant, a result of doing conversions this way is that you would gradually give less and less weight to individual utils. In other words, as the years went by you would be willing to give up more and more utils to get the same increase in average utils. This might not make sense.

I agree that Pascal's Mugging presents a problem. I would be all over total utilitarianism if it weren't for Pascal's Mugging.

I think it is more common for people to choose a moral system that suits best their subconscious preferences than to adjust their behavior to an independently chosen moral system.

However, since our preferences rarely fit into just one self-consistent moral system, the Parliament, similar to one you describe, already sits in every one of our heads.

Ian: The model permits extremism; but extreme views would have to be very probable in order to get their way on everything. (Yes, 100-X; corrected)

Robin: The problem is not just Pascal's wager example. The problem is also that some moral theories deny that you should maximize expected utility and/or deny that there are such things as utilities, or that the correct moral theory even has that kind of structure, etc. Of course you can just assume that you know for certain that all those theories are false, but that looks to me like moral overconfidence (especially considering that most human beings and perhaps most moral philosophers past and present would probably disagree with you).

Many: Why, o why, do so many on this blog appear to be utterly certain about which moral theory and/or which metaethical theory is correct? Of course, while for many people it is obvious that X is correct, it is a different X for different people that is obviously correct...

g: We would like to be as accommodating as possible to different kinds of moral theories, including those that don't (seem to) define a set of preferences whose satisfaction the theory seeks. For example, a virtue theorist might maintain that X is right if X is what a virtuous person would do, and then they may have a few things to say about what makes a person virtuous. The parliamentary model is about as accommodating of a wide range of (differently structured) moral theories as we have figured out to make it. Of course, we still impute to each ethical theory that it is able to make sense of the notion of what somebody who represented that theory would do in our imaginary parliament; but perhaps one can argue that this is a minimal requirement for a theory to count as a moral theory.

John: Total utilitarianism implies that a 1% change in total utility matters much more if the status quo is 1 trillion utils than if it is 10 utils.

Anonymous: regarding the virus moral theory: (a) other moral theories would have good reason to oppose the recommendations of such a theory, but also (b) there might nevertheless need to be some rules to prevent gaming the system - we already recognize that we need to prevent blackmail etc.

Nick, any set of actions that meet certain consistency conditions can be described as if maximizing some expected utility. Is it really clear that those moral theories recommend that your actions violate those conditions?

Marcello Herreshoff and I worked on a voting method for this situation. (To clarify, the method described is what you would pretend you're using; you would actually just choose the action with the most voting mass.)

This is probably a decent approximation of what often goes on in our heads without us realizing it, but not a good way to run an AGI. Yet, I think you are addressing one of the most important problems.

Moral uncertainty has been my main frustration in reading Eliezer's recent posts, and I voiced this uncertainty. I am not afraid to admit that I am not sure what the correct answer is on all fundamental moral questions. In weighing utilitarianism, negative utilitarianism, and Kantianism, I feel like those audiences in comedy movies that just agree with whoever is speaking at the moment.

A lot of these ethical frameworks seem to me to be reaching toward some common value system; they just have different emphases. In theoretical terms, even enlightened egoism isn't all that far away from enlightened altruism, and neither is far from utilitarianism. One problem is, as all of us here understand, that we are not always rational.

And within utilitarianism, I very much sympathize with negative utilitarianism, since eliminating suffering does seem to be the most urgent moral imperative. Yet perhaps focusing on that too much would have consequences similar to the precautionary principle and cause us to neglect the suffering of future persons and the potential good we can do by not spending all of our resources tackling that problem. It is not that this is an inherent flaw in negative utilitarianism, just that this is a flaw in humans.

Similarly, focusing too much on egoism could cause us to ignore the interdependency of people... which would be against our self-interest.

Also, by focusing too much on consequences you may get caught up in the unpleasant means to the point where that is all you have left. Anthony Giddens's theory of structuration states that society perpetually reproduces itself from the routinized actions, transactions, and discourse of its members, since those become the norms that everyone sees and internalizes. If, say, killing people is your routine means to an end, you banalize evil and before you know it you are a tyrant.

http://en.wikipedia.org/wiki/Theory_of_structuration

It would be great if there was some neat and tidy rubric we could all live by, but nobody has any great answers. Of course, while we in general have more in common than not, there are also fundamental differences in values.

I don't see how the Parliamentary Model ameliorates moral uncertainty, human irrationality, or differences in values. One thing that can certainly help is for people to simply put more thought into ethical questions.

As for the differences in opinion, the only solution I can think of is to give everyone their own fully customizable virtual world. That way we can all be happy without infringing on the happiness of others. That seems to be a potential outcome of Coherent Extrapolated Volition, and would be a good societal goal.

Not all moral theories need conflict. Some moral theories may turn out to be special cases of others. For instance, I hypothesize that virtue ethics is a special case of consequentialist-utilitarianism, but consequentialist-utilitarianism is in turn only a special case of aesthetics.

> the only solution I can think of is to give everyone their own fully customizable virtual world. That way we can all be happy without infringing on the happiness of others.

I think many agree that sufficiently detailed human simulations are indistinguishable from humans, so full customizability is incompatible with not infringing on the happiness of others.

The implied definition of 'fundamental moral issues' in use here eludes me. It seems like there may be more information embedded in the selection of the particular definition of 'fundamental moral correctness' than there is in the remainder of the post. If a clear definition of 'moral' was included or referred to, I could accept it for the sake of the argument without suspecting I'm having premises foisted on me behind my back.

I think we should redefine "morality" to mean "practicality," and then assign a goal to this practicality, before we're talking about anything but contextless universals.

For example, morality is practicality with the goal of making us a stable, more intelligent species that colonizes at least 7 planets.

Robin: I think the Kantian's "Don't lie!" doesn't mean "Assign a large negative utility to lies." For example, Kant might maintain that you should not lie now even in order to prevent yourself from telling ten lies later. - But if you had a good translation manual that enabled us effectively to translate all moral theories into utility functions, *in ways that do justice to each theory on its own terms*, then part of the problem would go away.

Cameron: "It seems like there may be more information embedded in the selection of the particular definition of 'fundamental moral correctness' than there is in the remainder of the post." - That may well be true. We try to make progress on one problem at a time.

Brett: "morality is practicality with the goal of making us a stable, more intelligent species that colonizes at least 7 planets." - Ah, why didn't I think of that.

Nick, I don't see how someone who doesn't lie violates any of the consistency axioms that imply their decisions can be described by expected utility theory.

> But if you had a good translation manual that enabled us effectively to translate all moral theories into utility functions, *in ways that do justice to each theory on its own terms*, then part of the problem would go away.

Nick, I may well be missing something, but since your theory's delegates bargain to achieve certain voting outcomes in the parliament, and since (they believe) these outcomes are probability distributions over actions taken, don't they reveal a preference ordering over gambles? If so, don't the von Neumann-Morgenstern axioms kick in to give you a utility function -- or do you anticipate delegate behavior that doesn't satisfy their axioms? (I'm not familiar enough with these things to think I have a deep insight to offer, I'm just hoping to understand better what you're getting at :-))

Oops, Robin snuck in. [So unless I misunderstand him too, my question actually made sense! Ha. =-)]

I agree with Nick that people are overconfident about morality, and I agree with Robin that Nick's main concern really is Pascal's Wager and similar situations. He is clearly talking about such situations when he mentions "dangerous and unstable extremism." In fact, Robin indicates the same concern when he says "So just tackle those sort of problems head on, and use those solutions here."

Of course, the idea of "use those solutions" overconfidently assumes in advance that Pascal's Wager is something to "solve," rather than something to give in to. This overconfident assumption seems common to Robin and Nick (although held much more strongly by Nick, since Robin has indicated his willingness, for example, to accept a bet in favor of the existence of UFOs, given a high enough return on the wager.)

What Robin and Benja said. The utilitarian framework is very flexible. You might not get a neat utility function if you try and shoehorn other systems into it, but you will at least get something - and that's all you need in order to perform analysis.

Robin, the problem would be with the Archimedean axiom. Let P be eating two pies, Q be eating a pie, and R be telling a lie. P > Q > R. The Archimedean axiom says we can find a probability A such that (AP + (1-A)R) > Q. But someone who "does not tell any lies" will never trade off a probability of telling a lie against a probability of eating an extra pie, no matter how small the former and how large the latter.

Unless Nick has something else in mind?
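One way to make the failure concrete is to model the "never lies" agent as ranking lotteries lexicographically: first by probability of lying, then by expected pies. That modeling choice is an assumption for illustration, but it shows why no probability A < 1 satisfies the Archimedean condition, whereas any finite real-valued utility assignment would force the condition to hold for A close enough to 1.

```python
# Lexicographic lottery preferences: minimize lie-probability first,
# then maximize expected pies. (An illustrative model of the deontologist.)
from fractions import Fraction

def lex_better(x, y):
    # x, y are (probability_of_lying, expected_pies)
    return (x[0], -x[1]) < (y[0], -y[1])

Q = (Fraction(0), Fraction(1))           # one pie for sure, no lie
for A in [Fraction(999, 1000), Fraction(999999, 1000000)]:
    mixture = (1 - A, 2 * A)             # A chance of two pies, (1-A) of a lie
    print(A, lex_better(mixture, Q))     # False, however close A gets to 1
```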

If I understand things right, the problem you raise is to do with the concept of a real-valued utility function. IMO, John Conway effectively showed that you really should use surreal numbers to represent value in many games - and if you do that, this problem goes away.

Eliezer, point. (Even if your reply was to Robin, not me...)

Tim, that sounds interesting. What should I read to learn more? Just anything on surreal numbers, or something specific?

"Check out Knuth’s Surreal Numbers, Conway & Guy’s Book of Numbers, or for more advanced users, Conway’s On Numbers and Games" - or search on the internet. For go players, see "Mathematical Go: Chilling Gets the Last Point" by Elwyn Berlekamp and David Wolfe.

My references on the topic of applications of surreal numbers to economic expected utility theory are - alas - rather limited, but surreal numbers crop up fairly frequently in the context of Pascal's Wager - e.g. see here.

> many moral theories state that you should not always maximize expected utility

It is certainly true that, according to many moral theories, it is not the case that you always ought to maximize expected good. It is not clear, however, why this is incompatible with handling moral uncertainty by maximizing "intertheoretic moral value". After all, many of those who deny that morality requires you to maximize expected good also affirm that prudence requires you to maximize your expected wellbeing. Expected utility theory can be used to represent different phenomena, and you can disagree that some phenomena can be represented with the apparatus of expected utility theory while agreeing that other phenomena can be so represented.

Robin, I had in mind the case where lying is absolutely forbidden, yet you may know that you will lie in the future, and that by lying now you can reduce the number of lies you will tell in the future; and yet you should not lie now. How do you read off a sensible cardinal utility function from such a theory, such that you can directly compare it with the utility function of, say, a classical utilitarian?

There is some discussion in the literature on whether deontologies can be "consequentialized"; but even if that were possible, it may not suffice for our present purposes - since what we need here is not simply a result showing that one could find some utility function that reproduced the theory's preference order among all available actions; rather, we would need a recipe that generated a unique utility function (such that it could then be weighed in an expected utility calculation against the utility functions postulated by more straightforwardly consequentialist theories).

Eliezer, yes, that would be another sticky point in trying to consequentialize deontology. (So one could try to assign some form of infinite negative value to any act of lying - but then we get into all the difficulties with infinite ethics... http://www.nickbostrom.com/ethics/infinite.pdf)

Tim, thanks! "An Introduction to Conway's Games and Numbers" seems good (I'm about 1/4 through). It's not yet clear to me what the benefits of the surreals over say the hyperreals would be for expected utility (if any), but they seem well worth learning about, in any case.

Nick, imagine I had the choice to enter one of several universes, in which life is very different. In one universe I live in a star, in another universe I'm a vast molecular cloud, and in a third I'm a small chunk of a neutron star. To make this choice, I would also face the difficult choice of how to weigh things in those very different universes. This degree of difference would make it a hard choice, but hardly take the choice out of the realm of expected utility theory. Similarly, you will indeed need to figure out how to compare elements of different moral theories if your choices must weigh them. This does mean it is a hard choice problem, but it hardly means expected utility theory goes away.

This seems like quite a fantastic bit of wish-fulfillment to me. We *wish* democratic morality (or rather, just plain democracy) were effective, therefore we come up with a rational framework for meta-morals where democratic representation *is* effective. In fact the whole idea hinges on a very unrealistic (formally impossible) assumption: the assumption of some a priori oracle by which the "probability" of a given framework is decided, thus allocating the (non-meta) democratic vote. Such meta-agreement, given conflicting subjective moral frameworks, is just as unlikely as democratic agreement (i.e., subject to the same constraints as regular democratic optimization, cf. Arrow) in the first place --- thus you haven't introduced anything novel, or useful, here. Sorry --- quite a fan of your usual work --- but skeptical in this matter.

jb

Robin, the choice situation you describe seems to be a prudential choice about deciding what's best for you rather than a moral choice about deciding what ethics requires from you. Very few dispute that prudence requires you to maximize your own expected wellbeing, but many dispute that morality requires you to maximize the expected net sum of everyone's welfare. So the choice situation you describe may be not hard enough to make you doubt that you should use expected utility theory to represent the demands of prudence and yet be sufficiently hard to make you doubt that you should use this theory to represent the demands of morality.

(Incidentally, the debate over consequentializing deontologies is not specific to expected utility theory. Deontologists deny that you always ought to do what's best even when there is no uncertainty involved.)

Suppose morality m1 says that A > B > C,
morality m2 says B > A > C,
morality m3 says C > A > B.
You use your parliament repeatedly to decide on action A, B, or C.

You assign votes to m1, m2, and m3 based on your estimated "probability" of their "correctness". You give 40% to m1, 30% to m2, and 30% to m3.

If the representatives think you will choose your action probabilistically, they don't cooperate: m1 votes A, m2 votes B, etc. You always take action A. This is the highest "expected-value" result.

If you want the actions B and C to be taken sometimes, this can happen if the reps know that you take the winner each time. Because then, m2 and m3 must collude with each other against m1. m1 sees that it can always be outvoted by m2 and m3, so it will make deals with m2 and m3. Eventually they reach a cyclic equilibrium that results in alternating actions A, B, and C.

Do you want to achieve the highest expected moral value? Then you want a voting system that will always choose action A, given those parameters. Do you want to minimize the average dissatisfaction of each party? Then you want the reps to know that winner-takes-all, which will lead to alternating A, B, C, A, A, B, etc.

In short: I don't think you can choose an arbitration mechanism without committing to one of the utility-aggregating schemes that the arbitration mechanism is supposed to choose between.
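For concreteness, the sincere-vote tally in this example looks like the sketch below (the collusion and cycling dynamics described above are not modeled; this only shows the plurality outcome when every delegate votes its top choice):

```python
# Sincere plurality vote for the m1/m2/m3 example above.
credences  = {"m1": 0.40, "m2": 0.30, "m3": 0.30}
top_choice = {"m1": "A",  "m2": "B",  "m3": "C"}

tally = {}
for theory, weight in credences.items():
    option = top_choice[theory]
    tally[option] = tally.get(option, 0.0) + weight

print(tally)                      # {'A': 0.4, 'B': 0.3, 'C': 0.3}
print(max(tally, key=tally.get))  # A wins every round under sincere voting
```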

Eliezer, even a deontologist is willing to trade off a higher probability of telling a lie against other desirable things. Your probability of telling a lie cannot remain absolutely constant, and it is equally impossible that every single one of your actions will drive the probability lower. So some of your actions must increase the probability of telling a lie, even for a deontologist.

Nick, a deontologist doesn't have to assign a negative infinite value to telling a lie, in order to ensure that he will never lie for the sake of pie. He just has to assign a limit to how much value pie can have, no matter how many pies are involved, due to marginal utility, and then assign a sufficiently high negative value (far beyond the pie limit) to lying. This does mean that the deontologist will sometimes trade off an extremely small increase in the probability of lying, for the sake of pie, as I just said to Eliezer: but he would never outright make up his mind directly to tell a lie, in order to get pie.
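A toy version of that construction, with made-up numbers: pie value is bounded above, lying is valued far below that bound, so no quantity of pie ever buys an outright lie, yet a vanishingly small probability of lying can still be traded for pie.

```python
import math

# Bounded pie value plus a finite (not infinite) penalty for lying.
PIE_CAP = 100.0
U_LIE = -1000.0                          # far beyond anything pie can reach

def u_pies(n):
    return PIE_CAP * (1 - math.exp(-n))  # diminishing, bounded above by PIE_CAP

# A certain lie is never worth it, however many pies:
print(u_pies(10**6) + U_LIE < 0)         # True

# But a one-in-a-million chance of lying for one pie can be:
p = 1e-6
print(p * U_LIE + u_pies(1) > 0)         # True
```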

> Robin, the problem would be with the Archimedean axiom. Let P be eating two pies, Q be eating a pie, and R be telling a lie. P > Q > R. The Archimedean axiom says we can find a probability A such that (AP + (1-A)R) > Q. But someone who "does not tell any lies" will never trade off a probability of telling a lie against a probability of eating an extra pie, no matter how small the former and how large the latter.
(You mean to say, we can find a probability A < 1 such that etc.)

But you're assuming that R can't be -infinity. A utilitarian who will not lie is someone who sets U(lying) = -inf.

Say there are two possible events, A and B, and we're uncertain about their relative moral value (we have some probability distribution for U(A)/U(B)). Say we're asked to choose whether to trade 3 instances of A for one instance of B. Do we calculate the expected value of U(A)/U(B), or the reciprocal of the expected value of U(B)/U(A), which is a completely different number? I don't think ordinary expected utility works here unless we have probability distributions for U(A) and U(B) individually in units of some free-floating objective utility.
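To see that these really are different numbers, here is a quick illustration with a made-up distribution: if U(A)/U(B) is equally likely to be 1/3 or 3, the expected ratio is about 1.67, while the reciprocal of the expected inverse ratio is 0.6.

```python
# Expected ratio vs. reciprocal of the expected inverse ratio (Jensen's
# inequality at work). The 50/50 distribution is illustrative only.
ratios = [1 / 3, 3.0]        # equally likely values of U(A)/U(B)

e_ratio   = sum(ratios) / len(ratios)                 # E[U(A)/U(B)] = 1.666...
e_inverse = sum(1 / r for r in ratios) / len(ratios)  # E[U(B)/U(A)] = 1.666...
print(e_ratio, 1 / e_inverse)                         # 1.67 vs 0.6
```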

> It's not yet clear to me what the benefits of the surreals over say the hyperreals would be for expected utility (if any)

I haven't looked at the hyperreals in this context. However, the whole motivation for developing surreal numbers was to solve decision problems in the game of go. Given a board state, how to value the possible moves? Essentially, Conway found integers were not enough, that real numbers only worked with an ugly fudge involving very small numbers, and that surreal numbers were the neatest solution. Game theory is decision theory in a different dress - so exactly the same math applies in both places.
