
August 15, 2007


Don't many people walk down the street talking to small boxes? They are talking to them everywhere, and the ones who are not talking to small boxes are listening to different small boxes or typing into small boxes. I am perhaps odd in seeing the book Rainbows End as the natural next step in this.

I have been linking Nick Bostrom's 2003 paper and Robin Hanson's 2001 ruminations on it whenever I see someone reference the NY Times column. Your comments, Robin, by the way, still link to the 2001 .doc version with "do not circulate" at the top.

If we have updated our priors to expect a certain degree of strangeness from this blog or elsewhere, to what extent have we come to see that as expected rather than strange? I am still shaky on how to quantify P(unexpected).

On strangeness, Bryan Caplan once claimed to be not quite strange enough to unilaterally transition from shaking hands to bowing. As it turns out, the only idea a GMU economist could propose that would sound strange to me is that there is an idea so strange that a GMU economist could not propose it.

Of course God's a jock--think of all the heavy lifting he's had to do over the years. . . . Seriously, though, I have to admit there's something silly about people taking this sort of idea seriously. Maybe if the reporter had interviewed some scientists working in hands-on areas such as biochemistry, water transport, etc., he would've expressed a little more skepticism. The funny thing is that the article is played so straight that I can't quite figure out if the reporter (or, for that matter, the people quoted) are serious or if they're just having fun. Probably a mixture of the two.

What is the strangest scenario that authority could convince you of?

Any of those authorities could, in the right circumstances, convince me of anything, no matter how strange (right circumstances meaning things like lots of backing evidence, repeated reporting of the event in question, quotes from experts, a year or two of repeating the same story without substantial refutation, etc.).

Though I personally think that very strange things are easier to get people to accept than things that are only slightly strange (consider the contrast between statements like "free markets boost the economy" and "Mother Teresa was seen yesterday in a jar of mayonnaise" versus the much stranger - but much more believed! - "we are made of quarks and electrons" and "God exists").

I'd call this one hindsight bias. People learn history, and they see that, throughout history, strange scenarios (such as women not being allowed to vote) have transitioned to more normal scenarios (such as women voting). In the history of science, naive and patently absurd beliefs (such as the Earth being flat) have been steadily replaced by ideas which actually sound quite plausible to the person on the street (such as the Earth being spherical). The whole story of human history, seen in hindsight, is a steady decrease in absurdity. Thus, the idea that in the future things will actually be stranger is nothing less than absurd, and we all know how unlikely it is for anything absurd to happen.

Unless, of course, the strangeness has been validated by observation in movie theaters. (You can't disbelieve your own eyes, can you?) I would guess that the NYT can get away with this because of The Matrix.

It seems to me to be completely arbitrary wish-fulfillment to believe it's more likely simulators would want us to be interesting than to be not interesting. If anything, it seems more likely that simulators would want us to try to rationally (Bayesian until we develop something better) maximize our persistence odds, since that seems to be what gets best rewarded with persistence. Being an avant-garde artist doesn't seem to mitigate the mortality risk of not wearing a seatbelt in our reality.

I don't think Robin's comment was wish fulfillment. Instead, it seemed to be motivated by thinking about what a simulator would find interesting. Wouldn't it be boring (from the simulator's perspective) if everyone were perfectly rational and Bayesian? Clearly, you believe that what you have brought up MANY times on this blog (maximizing persistence odds) is important, and you seem biased to believe that the simulator should also share your belief rather than considering motivations for creating the simulation in the first place.

Anon, I think you fail to address the last two sentences of my post in your reply. And I think they still serve as an effective rebuttal to your post.

"It seems to me to be completely arbitrary wish-fulfillment to believe it's more likely simulators would want us to be interesting than to be not interesting."
HA, why would simulations be conducted if not because the results were potentially interesting and useful to the simulators?

"If anything, it seems more likely that simulators would want us to try to rationally (Bayesian until we develop something better) maximize our persistence odds, since that seems to be what gets best rewarded with persistence."
I have to agree with anon that you seem to not be engaging the question in favor of an odd sort of projection. If simulators wanted to produce efficient egoists, they could simply generate them. Creating a simulated world with the appearance of stable physical laws and empirical regularities, so that efforts to avoid local risks do not appear to be always counter-productive, follows from many more plausible motivations.

Consider simulations of primitive civilizations run for scientific research to study the likely nature and development of alien life. Evaluating the Fermi Paradox, the threat of alien intelligence with opposed values, and the expected value of haste in launching an interstellar colonization wave would all be very beneficial for a huge variety of ultra-advanced civilizations seeking to survive or execute other goals. But scientific simulations will aim for fidelity and predictive power, not reward and punishment.

However, to run more simulations and more cost-effectively explore the space it could be worthwhile to use simpler models of individuals that are less important to the development of the simulated civilization.

"Being an avant-garde artist doesn't seem to mitigate the mortality risk of not wearing a seatbelt in our reality."
Again, this misses the point. The recommendations about how to live in a simulation address the risk that you will no longer be simulated, at least in sufficient detail to sustain your subjective consciousness. In a partial simulation, where appearances are generated on demand (e.g. the detailed nanoscale structures of objects exist only when they are examined with sensitive microscopes, features of the stars are generated on the fly as astronomers observe them, etc.), you can't conclude from your sensory impressions that beings continue to exist when you are not interacting with them: whenever you look, the relevant appearances will be generated.
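The "generated on demand" idea is essentially lazy evaluation with caching: detail is computed only when an observer first looks, and then cached so repeated observations stay consistent. A minimal sketch in Python (the function name and the "detail" it returns are purely hypothetical illustrations, not anything from the simulation literature):

```python
import functools

@functools.lru_cache(maxsize=None)
def nanoscale_structure(object_id: str) -> str:
    # Detail is synthesized only on first observation, then cached,
    # so later observations of the same object remain consistent.
    return f"generated detail for {object_id}"

# Nothing is computed until someone looks; once observed, the
# appearance is stable across re-observation.
first_look = nanoscale_structure("rock-17")
second_look = nanoscale_structure("rock-17")
print(first_look == second_look)  # → True
```

The point of the sketch is only that on-demand generation is cheap and, with memoization, indistinguishable from persistent existence to the observer.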

Also, if entertainment simulations are the ones most likely to see substantial intervention by the simulators, we should take into account the likelihood of false memories and recent creation.

You are making an assumption as to what the simulator would desire from those in the simulation. Honestly, why would a simulator care about us maximizing our persistence odds... I suppose it partially depends on the cost of producing another simulation. But then if we assume there is negligible cost to producing another simulation, we have to almost certainly focus our attention on other possible desired attributes of those in the simulation.

I think Robin is also trying to say that people the simulators find sufficiently interesting may wake up in another simulation upon local 'death'. But that gets deep into Pascal's Wager territory, with the same problem of not being able to know with any confidence just what gets rewarded.

Zubon, I was being ironic about the boxes. And I don't know which "your comments" you mean.

Andrew, what do you think relevant scientists in "biochemistry, water transport, etc." would have had to say?

Eliezer, your "world becomes less strange" bias suggestion is interesting and worth exploring further.

Carl, I think you misunderstand the rather unambiguous meaning of my post. I wasn't arguing that simulators are rewarding anything; rather, in our reality there seems to be more evidence that rational persistence-maximizing behavior is being rewarded with persistence than that arbitrarily "interesting" behavior is (which is different from claiming that anything is actually being rewarded or encouraged by simulators).

Your disagreement with me, in my opinion, seems tortured, leaning on speculation and unwarranted extrapolation (e.g. "If simulators wanted to produce efficient egoists, they could simply generate them."), and more about performing disagreement with me than about detecting an "odd sort of projection".

Hopefully, the whole point of noticing that our world may be a simulation is to note that what actually gets rewarded may be quite different than what seems to be rewarded.

Robin, I agree with you on that. But to move from there to a suggestion that there's more evidence that what gets rewarded is being more interesting rather than less interesting seems to me unwarranted. An admonition to people to "be as interesting as possible, so as to keep our simulation from being turned off" seems to me worse advice than "let's solve collective action problems to minimize empirically determined existential risk", given our best currently available information and models of reality.

[I edited out a bunch of nuances and caveats to this post, expecting good faith interpretations of what I wrote. I hope people aren't going to in bad faith force me to add in 100 footnotes, such as reasonable interpretation of Robin's usage of "interesting", etc.]

Hopefully, did you read the paper in question? I argue there by analogy to what we know about what people in our world like about simulations.

Robin, interesting piece (no pun intended) and I enjoyed reading it. But I don't think it changes the assessment in my 7:42pm post.


I don't disagree (and did not perform such disagreement above) that it appears that trying to preserve one's life is more successful than not doing so, and that artists do not appear to enjoy any narrative protection. That's the baseline before we consider the Simulation Argument.

I was responding to this statement, which I believe to be mistaken:

"If anything, it seems more likely that simulators would want us to try to rationally (Bayesian until we develop something better) maximize our persistence odds,"
In that comment and your reply to anon you mentioned the apparent benefits of efforts to survive as (non-conclusive) evidence about the motives of simulators. In other words, the probability of a world with no apparent simulator interventions on behalf of the 'interesting' is lower conditional on one set of simulator motives than on another, and we observe no such apparent interventions. I agree that this argument (the last two sentences of your initial comment) points to some evidence for the above statement, but that it is only very weak evidence and that the above statement is false, since:

1. The prior probability of simulations run for the human motivations Robin describes, or for predicting the nature of extraterrestrial civilizations as I discussed above, is far higher than for the idiosyncratic motivation of wanting simulated beings to maximize their persistence odds.

2. The probability of apparent interventions in favor of 'interesting' individuals, conditional on simulations being motivated by a desire to observe the outcomes of 'interesting' events, is not high. Likely motives for accurate simulations, and the possibility of false memories of a non-existent past without intervention, should both shift our estimates of that conditional probability.
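The structure of this argument is a likelihood-ratio update: observing no apparent interventions is evidence favoring whichever simulator-motive hypothesis makes such non-intervention more probable, weighted by the priors in point 1. A toy numerical sketch, with every probability invented purely for illustration (nothing here comes from the thread or the paper):

```python
# Hypothetical priors over two simulator-motive hypotheses (point 1).
prior_accurate = 0.8      # simulators want faithful, predictive simulations
prior_entertain = 0.2     # simulators intervene to reward 'interesting' people

# Hypothetical probability of observing *no* apparent interventions
# under each hypothesis (point 2).
p_noint_given_accurate = 0.99
p_noint_given_entertain = 0.5

# Bayes' rule after observing no apparent interventions.
evidence = (prior_accurate * p_noint_given_accurate
            + prior_entertain * p_noint_given_entertain)
post_accurate = prior_accurate * p_noint_given_accurate / evidence

print(round(post_accurate, 3))  # → 0.888
```

With these made-up numbers the posterior moves only from 0.8 to about 0.89, which is the sense in which the observation is "only very weak evidence": it shifts, but does not dominate, the priors.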

I think your assignment of probabilities in #1 is so contestable on its own grounds that contesting it feels like wasted effort on my part.
As for #2, it sounds like you disagree with Robin's paper linked to in his 8:34pm post. I agree with your distinction between interesting individuals and interesting events, but I think your probability assessment is arbitrary here, too.

Here are two strong problems with the way I see you assigning probabilities:
1. Although I think a good case can be made that there are more intrareality simulations than there are nonsimulation realities (given that we already have multiple simulations of our universe within our universe, and have the apparent capacity for many more), I don't think a good case can be made that there are more entertainment simulations than problem-solving simulations. If anything, the reverse seems more likely to be true.
2. It seems arbitrary (or even wrong-headed) to me to assign as "interesting events" for the simulators those that we seem to find interesting, or that we speculate a simulator would find interesting, rather than what actually appears, as best as we can tell, to be rewarded with persistence. So the best evidence is that simulators find innovations in sanitation to be interesting, even though entertainment media on this theme may not draw huge ratings here on Earth. That the "best evidence" may actually be "false memories of a non-existent past without intervention" is acknowledged, but that's a criticism that can be applied omnidirectionally. For example, beliefs about what we have found to be interesting may be "false memories ..." too. So I don't think it changes our probability estimates in any direction.


I think the scientists in biochemistry, water transport, etc., would doubt that a simulation could capture all the phenomena they see, ranging from photosynthesis to nanotechnology to the flow of arsenic in aquifers in Bangladesh. The usually understood scientific processes of physics, chemistry, evolution, etc., seem like a much more plausible description of what's happening.

It's similar to the argument against creationism: yes, a superbeing could have created the earth 6000 years ago, along with a complete fossil record, asteroids, meteorites, and people having appendixes--but it just seems a little silly.

Andrew, I don't know how hard it is for a sim to capture water flow and so on. But I'm pretty sure it wouldn't be hard to identify the scientists who were noticing a discrepancy between the sim and their scientific theories, reverse the sim there and rerun it with better fudged data that eliminate the discrepancy.

Which is why psychic powers always go away whenever a scientist tries to look at them!

The notion of hindsight bias is interesting, but it seems to me that for every person who has the feeling that history has progressed from strangeness to normality, there are those who feel quite the opposite and long for times past when "things made sense". Is this "nostalgia bias" perhaps?

Which is why psychic powers always go away whenever a scientist tries to look at them!

This statement speaks volumes about your complete unfamiliarity with the literature. You might want to read my short message about uninformed versus informed criticism of psi phenomena.

Chad, I bet those people still see history as going from strangeness to normality to strangeness, they just pick a different time at which to place 'normality'. (Well, unless their idea of normality is the Paleolithic.) The psychology of this is interesting.

Matthew C, you remind me of creationists who always insist that we just haven't read THIS particular tome promoting intelligent design or whatnot. Are there any peer-reviewed studies published in respected scientific journals you'd like to point out? Any phenomena reliably reproduced in front of skeptics? Because I doubt Eliezer is going to purchase Irreducible Minds before Atlas of Creation.

Robin: Oh good. I was debating between irony and whether you were thinking of people having R2D2s or Mother Boxes with them. I should have linked originally: here and here are postings that I meant.

Isn't the whole premise of the original paper a little wrong? He claims that the probability of our being in a simulation depends on whether or not humans will eventually be able to simulate full worlds. But the two are unrelated. It is not our future descendants or 'post-humans' who would be simulating us, it is someone in a completely different universe with different laws of physics.

Think of it like this - the fact that the little Sims in SimCity are simulated has nothing to do with their ability to create a simulation inside the game.

will perkins: The original paper shows that, according to our laws of physics, it's probable that we're living in a computer simulation. If it can be shown that our physics allow it, then it's a legitimate argument that we're likely to live in one - otherwise it's just a repetition of the millennia-old argument that it's possible for us to be living in a hallucination.

If one wants to make a plausible case for this, you have to start from our laws of physics and establish that it's likely according to them. Once you have already established that it's likely, then you can go around speculating about the physics in other worlds, because you've shown that there is at least one universe in which the argument holds (and the people simulating us might be living in a similar universe).
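For reference, the core of Bostrom's argument is an observer count. In a simplified form: if a fraction f_p of civilizations reach the stage of running ancestor-simulations, and each of those runs on average n simulations with as many observers as the real history, then the fraction of human-type observers who are simulated is f_p·n / (f_p·n + 1). A quick numeric sketch (the function is mine; the paper's own formula carries additional averaging terms):

```python
def fraction_simulated(f_p: float, n_sims: float) -> float:
    """Simplified Bostrom-style observer count: fraction of observers
    who are simulated, given that a fraction f_p of civilizations each
    run n_sims ancestor-simulations, each with as many observers as
    the unsimulated history."""
    return f_p * n_sims / (f_p * n_sims + 1)

print(fraction_simulated(0.0, 1000))   # → 0.0 (no one simulates)
print(fraction_simulated(0.01, 1000))  # even rare simulators dominate the count
```

This is why the argument hinges on whether our physics permits such simulations at all: if f_p or n is forced to zero, the fraction collapses, and if neither is, even modest values make simulated observers the overwhelming majority.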
