Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models. The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.
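A quick sketch of the standard compactness argument behind that fact (textbook reasoning, with T standing for an arbitrary first-order theory):

```latex
% No first-order theory with arbitrarily large finite models can rule out
% infinite models.
\begin{align*}
\lambda_n &\equiv \exists x_1 \ldots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j
  && \text{(``there exist at least $n$ distinct objects'')} \\
T' &= T \cup \{\lambda_n : n \in \mathbb{N}\}
  && \text{(add every such sentence to $T$)}
\end{align*}
% Every finite subset of T' is satisfied by some sufficiently large finite
% model of T, so by compactness T' has a model --- which must be infinite.
% Hence T cannot express ``all my models are finite.''
```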
So I observed that:
- Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken. (I go into some detail on this possibility below the cutoff.)
- If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.
And the one said, "Isn't that a form of Pascal's Wager?"
I'm going to call this the Pascal's Wager Fallacy Fallacy.
You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"
The original problem with Pascal's Wager is not that the purported payoff is large; that is not where the flaw in the reasoning lies. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).
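As a toy illustration of that second flaw, with purely schematic quantities (V a huge payoff, epsilon the complexity-penalized probabilities):

```latex
% Toy expected-utility calculation for Pascal's original Wager.
\begin{align*}
\mathbb{E}[U(\text{believe in Christian God})]
  &\approx \epsilon_{\text{Chr}} \cdot V \;+\; \epsilon_{\text{Mus}} \cdot (-V)
    \;+\; \text{(ordinary-sized terms)} \\
\epsilon_{\text{Chr}} &\approx \epsilon_{\text{Mus}} \approx 2^{-K}
  \quad \text{for some large complexity } K,
\end{align*}
% so the two huge terms roughly cancel and the decision is dominated by the
% ordinary-sized terms --- the largeness of V never does any work.
```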
However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someone mentions a big payoff - without any attention being given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.
And then, once the reasoning is perceptually recognized as an instance of "the Pascal's Wager fallacy", the other characteristics of the fallacy are automatically inferred: the probability is assumed to be tiny, and the scenario is assumed to have no specific support apart from the payoff.
But infinite physics and cryonics are both possibilities that, leaving their payoffs entirely aside, get significant chunks of probability mass purely on merit.
Yet instead we have reasoning that runs like this:
- Cryonics has a large payoff;
- Therefore, the argument carries even if the probability is tiny;
- Therefore, the probability is tiny;
- Therefore, why bother thinking about it?
(Posted here instead of Less Wrong, at least for now, because of the Hanson/Cowen debate on cryonics.)
Further details:
Pascal's Wager is actually a serious problem for those of us who want to use Kolmogorov complexity as an Occam prior, because the size of even the finite computations blows up much faster than their probability diminishes (see here).
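One way to see the blowup, using the busy beaver function BB(n) as a stand-in for the largest payoff a hypothesis of description length n can name:

```latex
% A hypothesis of description length n gets prior probability on the order
% of 2^{-n}, but the payoff it can describe can be as large as BB(n), which
% grows faster than any computable function of n.
\[
\sum_{n} 2^{-n} \cdot \mathrm{BB}(n) \;=\; \infty,
\]
% so the expected-utility sum is dominated by ever-shorter-to-state,
% ever-huger payoffs and fails to converge.
```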
See Bostrom on infinite ethics for how much worse things get if you allow non-halting Turing machines.
In our current model of physics, time is infinite, and so the collection of real things is infinite. Each time state has a successor state, and there's no particular assertion that time returns to the starting point. Considering time's continuity just makes it worse - now we have an uncountable set of real things!
But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.
The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.
On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".
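For concreteness, here is a minimal Python sketch of the Life rule on an unbounded grid; the pattern it steps is an ordinary glider, which just keeps traveling forever:

```python
# A minimal sketch of Conway's Life on an unbounded grid: live cells are
# stored as a set of (x, y) pairs, so nothing bounds how far a pattern can go.
from collections import Counter
from itertools import product

def step(live):
    """Advance one generation: birth on exactly 3 neighbors, survival on 2 or 3."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx, dy in product((-1, 0, 1), repeat=2)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations it reproduces its own shape shifted one cell
# diagonally, so on an unbounded grid it keeps traveling indefinitely.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    pattern = step(pattern)
print(sorted(pattern))  # the same shape as the glider, translated by one cell
```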
So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.
And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes, you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die), but they are exotic and do not carry enough probability weight to counterbalance the positive possibilities.
"Most AIs gone wrong are just going to disassemble you, not hurt you. I think I've emphasized this a number of times, which is why it's surprising that I've seen both you and Robin Hanson, respectable rationalists both, go on attributing the opposite opinion to me."
My apologies. I went back over some of your writings, sure I would find contradicting evidence, but it seems I am suffering from recall bias: the most horrifying scenarios were just the ones that came to mind most easily.
Posted by: Yvain | March 20, 2009 at 10:41 AM
Yvain wrote: "I know that many of the very elderly people I know claim they're tired of life and just want to die already, and I predict that I have no special immunity to this phenomenon that will let me hold out forever. But I don't know how much of that is caused by literally being bored with what life has to offer already, and how much of it is caused by decrepitude and inability to do interesting things."
I think it definitely has to do with senescence and decrepitude. I bet that in the past, when life expectancy was much shorter, people 50 years of age (in bad shape) felt the same as some 80-year-olds do now. Nobody likes to suffer; nobody likes impotence. Remove those and that changes a lot of things.
If you had the body and mind of a 30-year-old, I doubt you'd feel like that. I expect the universe to be big and varied enough to entertain someone for quite a while. Maybe if everything stayed totally static it could get boring, but I expect arts, science, technology, politics, etc., to keep changing.
Posted by: Michael G.R. | March 20, 2009 at 12:55 PM
Yvain:
It's hard to develop an AI that does as you say. It looks like it's easier to develop an AI that does as you want. People in the government of Communist China are not mutants. They are just like other people. So, if Communist China develops AGI, it's again more likely to be either an FAI or a Paperclipper AI than an Evil Communist AI.
Posted by: Vladimir Nesov | March 20, 2009 at 02:43 PM
Vladimir: I don't think the Communists would create an evil AI, but I don't think they'd create an Eliezer-style friendly AI either. I think they'd create an AI that does what they tell it. I don't think such a world would be Hell, but I don't think it would be any better than Communist China today, and it would bear the additional problem that you couldn't circumvent the censors and you'd have no hope of escaping or overthrowing it.
The Chinese wouldn't immediately become evil mutants when creating an AI, but they wouldn't immediately become peace-and-freedom hippies either. Keep in mind that one of the most surprising aspects of the SIAI's plan is that they don't intend to just program it to enact their own values all over the world. It's possible that absolute power mellows people out because they don't have to be so paranoid (see Mencius' post about Fnargl) but I wouldn't count on it.
TGGP: The hedonic set point is a good point, but not easy to grok. Taken literally, it would mean that North Korean refugees who flee to South Korea are wasting their time, and that you should be equally willing to move to Burma as to, e.g., the UK. It also implies that fighting to end dictatorship/help the economy/promote good policies is a stupid goal, since it doesn't help anyone. I'm still struggling to understand the implications of this for normal everyday morality, but until I do I'd rather not use it for cryonics.
Michael: Good point. I'm currently reconsidering my opposition in light of Eliezer's explanation that he thinks dystopian AI is unlikely.
Everyone: This next argument isn't My Real Objection, and discussing it will have no bearing on whether I sign up for cryonics or not, but I was thinking about it earlier today. Given MWI, I can assume that in some Everett branch I'll probably remain alive no matter what (I can even ensure this by generating a random number and signing up for cryonics if it falls within a specific small range). Although I do identify with my cryonically revived body, I don't identify with it any more than I identify with an identical Yvain from another Everett branch. Doesn't that mean that as long as I don't have a goal of maximizing the number of Yvains in the multiverse, I can satisfy my goal of continuing the existence of a being with whom I identify without signing up for cryonics, or by signing up for cryonics only if a coin comes up heads a hundred times in a row? (one reason this isn't my true objection: it implies that I should be indifferent to committing suicide in the present. I don't know *why* it's wrong, though. And I can't be the first person to think of this.)
Posted by: Yvain | March 20, 2009 at 04:05 PM
Yvain:
No, really. (It sounds like you missed my argument, since you just restated your position in greater detail.) I think it's very hard to create an AI that does as you say. It'd be very hard to create an AI that follows the government's orders without screwing them up to the point of dismantling the world. It looks like a much simpler concept to create an AI that follows the deeper intentions of specified agents, and since those agents are not mutants, the intention should be fine for other people too. So I expect the Chinese AI either to dismantle the world by mistake or to be an "Eliezer-style" FAI; I don't expect an orders-following AGI.
P.S. The argument from MWI suicide is wrong because you care about the measure of things, the same way you care about probability in decision theory. You don't want to just win; you also want to win with sufficient probability/measure.
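As a toy version of the same point (mu_i being the measure of branch i):

```latex
% Decisions are ranked by measure-weighted utility over branches, not by
% whether some branch with the desired outcome merely exists.
\[
\mathbb{E}[U] \;=\; \sum_i \mu_i \, U(o_i),
\qquad \sum_i \mu_i = 1.
\]
% Signing up only when a coin lands heads a hundred times in a row leaves
% survival-via-cryonics with measure about 2^{-100}: the branch exists,
% but it contributes almost nothing to the sum.
```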
Posted by: Vladimir Nesov | March 20, 2009 at 10:20 PM
Johnicholas, can you comment on this?
At AGI 2009, Selmer Bringsjord presented a paper, General Intelligence and Hypercomputation, which says:
When he says that they "use formal schemes that cannot even be represented", he is obviously wrong, since I assume these schemes were published in books that would still be readable if reduced to bitmaps.
Is there any sense to the rest of his argument? I would be shocked if the answer were yes, but I know nothing about "hypercomputation".
Posted by: Phil Goetz | March 21, 2009 at 09:42 PM
Johnicholas, the set of real numbers does not have a 1:1 mapping onto a set of string names.
For example, what string corresponds to pi?
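The underlying counting argument, sketched:

```latex
% Finite strings over a finite alphabet form a countable set, while the
% reals are uncountable, so no naming scheme by finite strings can cover
% every real number.
\[
\bigl|\Sigma^{*}\bigr| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|,
\]
% so any map from strings to reals misses all but countably many reals:
% almost every real number has no finite name at all.
```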
Posted by: Leo Petr | March 26, 2009 at 12:37 PM
Great post, Eliezer!
I'm not sure why people suggest that Islam counterbalances against Christianity more than against atheism. It's true that belief in the divinity of Jesus contradicts the tawhid of Allah, and for that reason many Muslims do believe Christians go to hell. But there are also some early suras in the Qur'an suggesting that Christians, as "people of the Book," will be saved (e.g., 2:62, 3:113-15, 3:199, 5:82-85). In contrast, belief in God is a definite requirement for salvation, so Allah would most likely send atheists to hell.
Posted by: Utilitarian | March 26, 2009 at 11:49 PM
I think the amount of resources (say time and money) you have is crucial. Even with super-large amounts of resources I wouldn't spend time on believing in Christianity, for reasons including those pointed out in the article.
Having $1000 to spend every month, would I spend $50 of it on cryonics insurance? No! I think it is more rational to invest that money in high-risk stocks. Maybe the way to a Very Long Lifespan turns out to be uploading, or just continually fixing broken parts of the body, or any of a range of other possibilities.
Someone might point out that cryonics is available today and thus stands out. Sure, but what if I survive for fifty more years, a rejuvenation technology appears and I cannot afford it?
Posted by: Daniel | March 29, 2009 at 07:51 AM