
December 16, 2008


Why shouldn't we focus on working out our preferences in more detail for the scenarios we think most likely? If I think it rather unlikely that I'll have a genie who can grant three wishes, why should I work hard to figure out what those wishes would be? If we disagree about how likely various scenarios are, we will of course disagree about where preferences should be elaborated in the most detail.

I mean - most agents with utility functions shouldn't have such a hard time describing their perfect universe.

As I understand it, most organisms act as though they want to accelerate universal heat death.

That's been an important theory at least since Lotka's "Contribution to the Energetics of Evolution" (1922), and it has been explicated in detail in modern times by Roderick Dewar's work on the Maximum Entropy Production (MEP) principle.

To change that would require self-directed evolution, a lot of discipline - and probably not encountering any aliens - but what would the motivation be?

Setting aside the arbitrary volumes and dimensions, and ignoring all the sparkle and bling, the good Rev.'s Heaven consists of: "All inhabitants are honest and there are no locks, no courts, and no policemen."

Basically: agreement on what constitutes life, liberty, and property, then perfect respect for them. That may actually approximate Utopia, at least as far as Homo sapiens is concerned.

Robin: I think Eliezer's question is worth thinking about now. If you do investigate what you would wish from a genie, isn't it possible that one of your wishes might be easy enough for you to grant without the genie? You do say you haven't thought about the question yet, so you really have no way of knowing whether your wishes would actually be that difficult to grant.

Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those questions wrong and then using those wrong answers to run my life for a while. All I have to say on the matter is that that situation is definitely worth avoiding. I still don't expect my present set of answers to be right. I think they're marginally more right than they were three years ago.

You don't have a genie, but you do have a human brain, which is a rather powerful optimization process, and despite not being a genie it is still quite capable of shooting its owner in the foot. You should check what you think you want in the limiting case of absolute power, because if that's not what you want, then you got it wrong. If you think the meaning of life is to move westward, but on considering the actual lay of the land hundreds of miles west of where you are you discover you wouldn't like it there, then it's worth formulating more carefully why you wanted to go west in the first place; once you know the reason, maybe going north is even better. If you don't want to waste time moving in the wrong direction, then it's important to know what you want as clearly as possible.

Maximized freedom with the constraint of zero violence. Violence will always exist as long as there is scarcity, so holodecks + replicators will save humanity.

"Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those question wrong and then using those wrong answers to run my life for a while."

I think that asking what you want the universe to look like in the long run has little or no bearing on how to live your life in the present. (Except insofar as you direct your life to planning the universe's future.) The problems confronted are different.

"The default, loss of control, followed by a Null future containing little or no utility. Versus extremely precise steering through "impossible" problems to get to any sort of Good future whatsoever."

But this is just repeating the same thing over and over. 'Precise steering' in your sense has never existed historically, yet we exist in a non-null state. This is essentially what Robin extrapolates as continuing, while you postulate a breakdown of historical precedent via abstractions he considers unvetted.

In other words, 'loss of control' is begging the question in this context.

Phil: Really? I think the way the universe looks in the long run is the sum total of the way that people's lives (and other things that might matter) look in the short run at many different times. I think you're reasoning non-extensionally here.


Do you think singleton scenarios in aggregate are very unlikely? If you are considering whether to push for a competitive outcome, then a rough distribution over projected singleton outcomes, and utilities for projected outcomes, will be important.

More specifically, you wrote that creating entities with strong altruistic preferences directed towards rich legacy humans would be bad, that the lives of those entities (despite satisfying their preferences) would be less valuable than those of hardscrapple frontier folk. It's not clear why you think the existence of agents with those preferences would be bad relative to the existence of obsessive hardscrapple replicators. What if, as Nick Bostrom suggests, evolutionary pressures result in agents with architectures you would find similarly non-eudaimonic? What if hardscrapple replicators find that they can best expand through the universe by creating lots of slave-minds that only care about executing instructions, rather than intrinsically caring about reproductive success?


Scarcity can be restored very, very shortly after satiation with digital reproduction and Malthusian population growth.
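(A back-of-the-envelope version of that claim, assuming digital minds can copy themselves with some fixed doubling time t_d: the population grows as N(t) = N_0 * 2^(t / t_d). With a doubling time of one day, a single mind becomes more than 10^109 copies within a year, so any finite resource surplus is consumed almost immediately.)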

I am unimpressed by Robin's answer. With unlimited power there is no reason to think hard, seek advice, or be careful. Just do whatever the hell you want, and if something bad results you can always undo it. But of course, you don't need to undo anything, because you have unlimited foresight. So all you have to do is run a brute-force search of the space of all possible actions, and then pick the one with the consequences that you like the most. Simple.
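(A minimal Python sketch of Joe's recipe, purely illustrative; simulate and utility are hypothetical stand-ins for the unlimited foresight and the preferences that the scenario grants:

    def omnipotent_choice(possible_actions, simulate, utility):
        # Brute-force search: simulate the consequences of every
        # possible action, score each outcome, and return the action
        # whose outcome you like the most.
        return max(possible_actions, key=lambda a: utility(simulate(a)))

Feasible for a genie with unlimited computation; not for anyone else, which is rather the point.)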

For me Marcello's comment resonates, as does the following from _Set Theory with a Universal Set_ by Thomas Forster. I am basically some kind of atheist or agnostic, but for me the theme is religion in the etymological sense of tying back, from the infinite and paradoxical to the wonder, tedium and frustration of everyday life and the here and now. (I dream of writing a book called Hofstadter, Yudkowsky, Covey: a Hugely Confusing YES!)


"However, it is always a mistake to think of anything in mathematics as a mere pathology, for there are no such things in mathematics. The view behind this book is that one should think of the paradoxes as supernatural creatures, oracles, minor demons, etc. -- on whom one should keep a weather eye in case they make prophecies or by some other means inadvertently divulge information from another world not normally obtainable otherwise. One should approach them as closely as is safe, and from as many different angles as possible."

Somewhere else in the book, he talks about trying to prove one of the contradictions of naive set theory in one of the axiomatic systems presumed to be consistent, and seeing what truths are revealed as the exploded bits of the proof spontaneously reassemble. Things like the magic of recursion as embodied in the Y combinator.

Thus I value people like Eliezer trying to ponder the imponderable.

Carl Shulman: that is why I will create a solar-powered holodeck with a built-in replicator, and launch myself into deep space attached to an asteroid with enough elements for the replicator.

The rest of humanity can go to hell.


A *solar-powered* holodeck would be in trouble in *deep space*, particularly when the nearby stars are surrounded with Matrioshka shells/Dyson spheres. Not to mention being followed and preceded by smarter and more powerful entities.

This is a silly line of argument. You can't hold identity constant and change the circumstances very much.

If I were given unlimited (or even just many orders of magnitude more than I now have) power, I would no longer be me. I'd be some creature with far more predictive and reflective accuracy, and this power would so radically change my expectations and beliefs that it's ludicrous to think that the desires and actions of the powerful agent would have any relationship to what I predict I'd do.

I give high weight (95%+) to this being true for all humans, including Robin and Eliezer.

There is no evidence to be had from impossible predictions based on flawed identity concepts.

Oh, I keep meaning to ask: Eliezer, do you think FAI is achievable without first getting FNI (friendly natural intelligence)? If we can't understand and manipulate humans well enough to be sure they won't destroy the world (or create an AI that does so...), how can we understand and manipulate an AI that's more powerful than a human?

Joe Teicher, are you ever concerned that this is the current case? If a universe is a cellular automaton that cannot be predicted without running it, and you are a demiurge deciding how to implement the best of all possible worlds, you just simulate all those worlds, complete with qualia-filled beings going through whatever sub-optimal existence each offers, then erase them and instantiate one, or declare one "real." Which seems entirely consistent with our experience. I wonder what the erasure will feel like.

Obvious answer: ask me again when I've used my unlimited power to increase my intelligence as much as is physically possible.

Hmm, true.
Alright: a fission reactor with enough uranium to power everything for several lifetimes (whatever my lifetime is at that point), and accelerate the asteroid up to relativistic speeds. Aim the ship out of the galactic plane. The energy required to catch up with me will make it unprofitable to do so.
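(The physics behind that last claim: the kinetic energy needed to bring a mass m up to speed v is E_k = (gamma - 1) * m * c^2, with gamma = 1 / sqrt(1 - v^2/c^2), which grows without bound as v approaches c, so each further increment of pursuit speed costs a would-be pursuer disproportionately more energy.)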

Marcello, I won't say any particular possible scenario isn't worth thinking about; the issue is just its relative importance.

Carl, yes of course singletons are not very unlikely. I don't think I said the other claim you attribute to me.


In that case can you respond to Eliezer more generally: what are some of the deviations from the competitive scenario that you would expect to prefer (upon reflection) that a singleton implement?

On the valuation of slaves, this comment seemed explicit to me.

"Why does anything exist in the first place?" or "Why do I find myself in a universe giving rise to experiences that are ordered rather than chaotic?"

So... is cryonics about wanting to see the future, or is it about going to the future to learn the answers to all the "big questions?"

To those who advocate cryonics, if you had all the answers to all the big questions today, would you still use it or would you feel your life "complete" in some way?

I personally will not be using this technique. I will study philosophy and mathematics, and whatever I can find out before I die - that's it - I just don't get to know the rest.

I see the valuable part of this question not as what you'd do with unlimited magical power, but as more akin to the earlier question asked by Eliezer: what would you do with $10 trillion? That leaves you making trade-offs, using current technology, and still deciding between what would make you personally happy, and what kind of world you want to live in.

Once you've figured out a little about what trade-offs between personal happiness and changing the world you'd make with (practically) unlimited (but non-magical) resources, you can reflect that back down to how you spend your minutes and your days. You don't make the same trade-offs on a regular salary, but you can start thinking about how much of what you're doing is to make the world a better place, and how much is to make yourself or your family happier or more comfortable.

I don't know how Eli expects to get an FAI to take our individual trade-offs among our goals into account, but since my goals for the wider world involve more freedom and less coercion, I can think about how I spend my time and see if I'm applying the excess over keeping my life in balance to pushing the world in the right direction.

Surely you've thought about what the right direction looks like?

'Precise steering' in your sense has never existed historically, yet we exist in a non-null state.

Aron, Robin, we're only just entering the phase during which we can steer things to either a really bad or really good place. Only thinking in the short term, even if you're not confident in your predictions, is pretty irresponsible when you consider what our relative capabilities might be in 25, 50, 100 years.

There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things. They probably 'thought' of themselves as doing just fine, extrapolating a nice stable future of hunting, gathering, procreating etc.

Marcello, have a go at writing a post for this site, I'd be really interested to read some of your extended thoughts on this sort of thing.

Dagon has made a point I referred to in the previous post: in the sentence “I have unlimited power” there are four unknown terms.

What is “I”? What does individuality include? How is it generated? Eliezer does not consider the elusive notion of self, because he is too focused on the highly hypothetical assumption of “self” that we adhere to in Western societies. But were he to set that ingenuity aside for a while, he would discover that merely defining “self” is extremely difficult, if not impossible.

“Unlimited” goes in the same basket as “perfect”. Both are human concepts that do not work well in a multidimensional reality. “Power” is another murky concept, because in social science it is the potential ability of one agent to influence another. In your post, however, it seems we are talking about power as some universal capacity to manipulate matter, energy, and time. One of the few things that quantum mechanics and relativity theory agree on is that this is probably impossible.

“I have unlimited power” implies total volitional control of a human being (presumably Robin Hanson) over the spacetime continuum. This is even more ridiculous, because Robin is part of the system itself.

The notion of having such power, but being somehow separated from it, is also highly dubious. Dagon rightly points to the transformative effect such “power” would have (only that it is impossible :) ). Going back to identity: things we have (or rather, things we THINK we have) do transform us. So Eliezer may want to unwind the argument. The canvas is flawed, methinks.

Ben: "There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things."

Are you actually hoping that won't happen? That we'll still be human a million years from now?

Phil: extinction vs. transcendence.

V.G., Eliezer was asking a hypothetical question, intended to get at one's larger intentions, sidestepping lost purposes, etc. As Chris mentioned, substitute wielding a mere outrageous amount of money instead if that makes it any easier for you.

You know, personally I think this strategy of hypothetical questioning, for developing the largest and deepest meta-ethical insights, could be the most important thing a person can do. And it seems necessary to the task of intelligently optimizing anything we'd call moral. I hope Eliezer will post something on this (I suspect he will), though some of his material may touch on it already.

Phil: "We" don't all have to be the same thing.

Jason, please see my comment in the next Eliezer post.

Phil, what Vlad and Nick said. I've no doubt we won't look much like this in 100 years, but it's still humanity and its heritage shaping the future. Go extinct and you ain't shaping nothing. This isn't a magical boundary, it's a pretty well-defined one.
