## September 21, 2008

You have another inconsistency as well. As you should have noticed in the "How many" thread, the assumptions that lead you to believe that failures of the LHC are evidence that it would destroy Earth are the same ones that lead you to believe that annihilational threats are irrelevant (after all, if P(W|S) = P(W), then Bayes' rule leads to P(S|W) = P(S)).

Thus, given that you believe that failures are evidence of the LHC being dangerous, you shouldn't care. Unless you've changed to a new set of incorrect assumptions, of course.
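A minimal numerical check (not in the original comment) of the Bayes'-rule step being invoked above: with W = "the LHC would destroy the world" and S = "the LHC fails", independence in one direction forces it in the other. The specific numbers are arbitrary placeholders:

```python
# If P(W|S) = P(W), then Bayes' rule gives P(S|W) = P(S).
p_w = 0.001   # P(W): placeholder prior that the LHC is world-destroying
p_s = 0.3     # P(S): placeholder prior that a given run fails

p_w_given_s = p_w                      # the independence assumption P(W|S) = P(W)
p_s_given_w = p_w_given_s * p_s / p_w  # Bayes' rule: P(S|W) = P(W|S) P(S) / P(W)

assert abs(p_s_given_w - p_s) < 1e-12  # independence is symmetric
```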

Simon, anthropic probabilities are not necessarily the same probabilities you plug into the expected utility formula. When anthropic games are being played, it can be consistent to have a ~1 subjective probability of getting a cookie whether a coin comes up heads or tails, but you value the tails outcome twice as much. E.g., a computer duplicates you if the coin comes up tails, so two copies of you get cookies instead of one. Either way you expect to get a cookie, but in the second case, twice as much utility occurs from the standpoint of a third-party onlooker... at least under some assumptions.

I admit that, to the extent I believe in anthropics at all, I sometimes try to do a sum over the personal subjective probabilities of observers. This leads to paradoxes, but so does everything else I try when people are being copied (and possibly merged).

Regardless, the question of what we expect to see when the world-crusher is turned on, and how much utility we assign to that, are distinct at least conceptually.

And if turning on the LHC or other world-smasher causes other probabilities to behave oddly, we care a great deal even if we survive.

The World-Crusher. CERN should copyright that. In fact, I might buy the domain worldcrusher.com and have it redirect to the LHC site.

And even if it doesn't come up with any new physics, it's definitely proving to be worth its weight in thought experiments.

Your probability estimate of the LHC destroying the world is too small. Given that at least some physicists have come up with vaguely plausible mechanisms for stable micro black hole creation, you should think about outrageous or outspoken claims made in the past by a small minority of scientists. How often has the majority view been overturned? I suspect that something like 1/1000 is a good rough guess for the probability of the LHC destroying us. This seems roughly consistent with the number of LHC failures that I would tolerate before I joined a pressure group to shut the thing down; e.g. 10 failures in a row, each occurring with probability 50%.

I suspect you just don't want to admit to yourself that the experiment is that risky: we'd be talking about an expected death toll of 6 million this spring. Yikes!
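The arithmetic behind the two figures above can be sketched quickly (the world population is taken as an illustrative 6 billion, which is what the "6 million" figure implies; neither number is a precise estimate):

```python
# Ten independent failures, each with probability 50%, have joint
# probability 2^-10 -- close to the 1/1000 prior guess above.
p_run_failure = 0.5
n_failures = 10
p_all_fail = p_run_failure ** n_failures
print(p_all_fail)   # 0.0009765625, i.e. exactly 1/1024

# Expected death toll at a 1/1000 destruction probability, taking the
# world population as roughly 6 billion (an assumption, not a source figure).
p_doom = 1 / 1000
population = 6e9
expected_deaths = p_doom * population
print(round(expected_deaths))   # 6000000 -- the "6 million" above
```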

"if turning on the LHC or other world-smasher causes other probabilities to behave oddly"

How can it possibly do so, except in the plain old sense of causal interaction, which is emphatically not what this discussion is about?

Let's think about what observer selection effects actually involve.

Suppose that there is a sort of multiverse (whether it is a multiverse of actualities or just a multiverse of possibilities does not matter for this analysis). At some level it consists of elementary "events" or "states of affairs" which are connected to each other by elementary causal relations. At a slightly higher level these elementary entities form distinct "worlds" (whether these worlds are strictly causally disjoint, or do interact after all, does not matter for this analysis). At some intermediate level are most of the complex entities and events with which we are habitually concerned, such as the activation of the LHC and the destruction of the Earth.

Regarding these intermediate events, we can ask questions like, what is the relative frequency with which event B occurs in a world given that event A has occurred elsewhere in that world, or even, what is the relative frequency with which event B is causally downstream of event A, throughout the multiverse? (Whether the first question is always a form of the second, in a multiverse which is combinatorially exhaustive with respect to the elementary causal relations constituting the individual worlds, I'm not sure.)

So far, so straightforward. I could almost be talking about statistical analysis of a corpus of documents, rather than of an ensemble of worlds.

Now what are observer selection effects about? Basically, we are making event A something like "the existence of an observer". When we condition on that, we find that some Bs become significantly more or less frequent, than they are when we just ask "how often does B happen, across the multiverse?".

But suppose my event A is something like, "the existence of an observer who reads a blog called Overcoming Bias and shares a world with a physics apparatus called the LHC". Well, so what? It's just another complicated state of affairs on that intermediate level, and it will shift the B-frequencies from their unconditioned values in some complicated way. Even if there are subjective duplicates and their multiplicities change in some strange way, as in an interacting many-worlds theory with splitting and merging... it's complicated, but it's not mysterious.

So finally, what is the scenario we are being asked to entertain? State of affairs A: The existence of an observer who shares a world with an LHC which repeatedly breaks down. And we are asked to estimate how this affects the probability of state of affairs B: An LHC which, if it worked, would destroy the Earth.

Well, let's look at it the other way around. State of affairs A': An LHC which, if it worked, would destroy the Earth. State of affairs B': An LHC which keeps malfunctioning whenever it is switched on.

From this angle, there is no question of anthropics, because we are just talking about a physics experiment. All else being equal, the fact that something is a bomb does not in any way make it less likely to explode.

If we then switch back to the original situation, we are in effect being asked this question: if a device keeps breaking down for unlikely reasons, does that make it more likely to be a bomb?

The sensible answer is certainly no. Now maybe someone can come up with a strange many-world physics, in which observer-duplicate multiplicities vary in such a way that the answer is yes, on account of observer selection effects. It would certainly be interesting to see such an argument. In fact I think this whole line of thought originates with the fallacy of conditioning on survival of observers rather than conditioning on existence of observers. (Even if you die, you existed, and that uses up the opportunity for anthropic reasoning in the classic form.) Nonetheless, some wacky form of observer-physics might exist in which this generic conclusion is true after all. But even if it could be found, you would then have to weigh up the probabilities that this, and only this, is the true physics of the multiverse. (And here we hit one of the truly fundamental problems here: how do we justify our ideas about the extent of the possible? But that's too big a question for this comment.) If only one little corner of the multiverse behaves in this way, then the answer to the question will still be no, because A and B will also occur elsewhere.

So, to sum up a long comment: This whole idea probably derives from a specific fallacy, and it should not be taken seriously unless someone can exhibit an observer-selection argument for a breakdowns-implies-lethality effect, and even then such an effect is probably contingent on a peculiar form of observer-physics.

roko,
"Given that at least some physicists have come up with vaguely plausible mechanisms for stable micro black hole creation, you should think about outrageous or outspoken claims made in the past by a small minority of scientists. How often has the majority view been overturned? I suspect that something like 1/1000 is a good rough guess for the probability of the LHC destroying us."
This reasoning gives the probability 1/1000 for any conceivable minority hypothesis, which is inconsistent. In general, I think this debate only illustrates that people are not at all good at guessing extremely low or extremely high probabilities, and usually end up in some sort of inconsistency.

Prase: "This reasoning gives the probability 1/1000 for any conceivable minority hypothesis, which is inconsistent."

Sure; for example if you applied this kind of "rough guesstimate" reasoning to, say, 1001 mutually exclusive minority views, you would end up with a probability greater than 1. But I would not apply this reasoning in all cases: there may be some specific cases where I would modify the starting guess, for example if it led to inconsistency.
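The inconsistency just described is one line of arithmetic (numbers as in the comment):

```python
# Assigning the same 1/1000 rough guess to 1001 mutually exclusive
# minority hypotheses gives a total probability exceeding 1.
n_hypotheses = 1001
p_each = 1 / 1000
total = n_hypotheses * p_each
print(total > 1)   # True: probabilities of exclusive events cannot sum past 1
```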

I think that this illustrates that it is hard to draw hard and fast rules for useful heuristics. I think that you'd agree that assigning a probability of 1/200 or 1/5000 to the hypothesis that the scientific community is mistaken about the safety of some particular process is a reasonable heuristic to go around with, even if overzealous application of such heuristics leads to inconsistencies. The answer, of course, is not to be overzealous.

And, of course, a better answer than the one I originally gave would be to look into the past history of major disasters that were predicted by some minority view within the scientific community, and get some actual numbers. How many times has a small group of outspoken doomsayers been proven right? How many times not? If I had the time I'd do it. Perhaps this would be a useful exercise for the FHI to undertake.

Basically, everyone knows that the probability of the LHC destroying the earth is greater than one in a million, but no one would do anything to stop the thing from running, for the same reason that no one would pay Pascal's Mugger. (My interests evidently haven't changed much!)

I like Roko's suggestion that we should look at how many doomsayers actually predicted a danger (and how *early*). We should also look at how many dangers occurred with no prediction at all (the Cameroon lake eruptions come to mind).

Overall, the human error rate is pretty high: http://panko.shidler.hawaii.edu/HumanErr/
Getting the error rate under 0.5% per statement/action seems very unlikely, unless one deliberately puts it into a system that *forces* several iterations of checking and correction (Panko's data suggests that error checking typically finds about 80% of the errors). For scientific papers/arguments, one bad per thousand is probably conservative (my friend Mikael claimed the number of erroneous maths papers is far less than this because of the peculiarities of the field, but I wonder how many orders of magnitude that can buy them).
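A sketch of why iterated checking matters, using the figures cited above (a ~0.5% base per-statement error rate and a checking pass that catches roughly 80% of remaining errors; both are Panko-style ballpark numbers, not exact measurements):

```python
# Each checking pass catches ~80% of remaining errors, so the residual
# rate after k passes is base_rate * 0.2^k.
base_error_rate = 0.005   # ~0.5% per statement/action
catch_rate = 0.80         # fraction of errors found per checking pass

rate = base_error_rate
for iteration in range(1, 4):
    rate *= (1 - catch_rate)   # each pass leaves ~20% of the errors
    print(f"after pass {iteration}: {rate:.6f}")

# Three passes reach 0.005 * 0.2^3 = 0.00004, i.e. about 1 error in
# 25,000 statements -- hence "several iterations" in the comment above.
```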

At least to me this seems to suggest that in the absence of any other evidence, assigning a prior probability much less than 1/1000 to any event we regard as extremely unlikely is overconfident. Of course, as soon as we have a bit of evidence (cosmic rays, knowledge of physics) we can start using smaller priors. But uninformative priors are always going to be odd and silly.

One reason I dislike many precautionary arguments is that they seem to undervalue what we learn by doing things. Very often in science, when we have chased down a new phenomenon, we detect it by relatively small effects before the effects get big enough to be dangerous. For potentially dangerous phenomena, what we learn by exploring around the edges of the pit can easily be more valuable than the risk we faced of inadvertently landing in the pit in some early step before we knew it was there. Among other things, what we learn from poking around the edges of the pit may protect us from stuff there that we didn't know about that was dangerous even if we didn't poke around the pit. One of the consequences of decades of focus on the physics of radiation and radioisotopes is that we understand hazards like radon poisoning better than before. One of the consequences of all of our recombinant DNA experimentation is that we understand risks of nature's own often-mindboggling recombinant DNA work much better than we did before.

The main examples that I can think of where the first thing you learn, when you tickle the tail enough to notice the tail exists, is that Tigers Exist And Completely Outclass You And Oops You Are Dead, involve (generalized) arms races of some sort. E.g., it was by blind luck that the Europeans started from the epidemiological cesspool side of the Atlantic. (Here the arms race is the microbiological/immunological one.) If history had been a little different, just discovering the possibility that diseases were wildly different on both sides could easily have coincided with losing 90+% of the European population. (And of course as it happened, the outcome was equally horrendous for the American population, but the American population wasn't in a position to apply the precautionary principle to prevent that.) So should the Europeans have used a precautionary principle? I think not. Even in a family of alternate histories where the Europeans always start from the clean side, in many alternate subhistories of that family, it is still better for the Europeans to explore the Atlantic, learn early about the problem, and prepare ways to cope with it. Thus, even in this case where the tiger really is incredibly dangerous, the precautionary principle doesn't look so good.

The problem with looking at how many doomsayers were successful in history is that it completely overlooks the concerned hypothesis itself. Doomsday prophecies are not all equally probable. If we constrain our attention to prophecies of the destruction of the whole Earth (which seems most relevant for this case), the rate of success is obviously 0.

@ prase: well, we have to get our information from somewhere... Sure, past predictions of minor disasters due to scientific error are not in exactly the same league as this particular prediction. But where else are we to look?

@anders: interesting. So presumably you think that the evidence from cosmic rays makes the probability of an LHC disaster much less than 1 in 1000? Actually, how likely do you think it is that the LHC will destroy the planet?

"Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability..."

"Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"... After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"

If the LHC fails repeatedly it can only be because of logical engineering flaws. In fact, the complexity of the engineering makes it easier for people to attribute failures to unseen, unnatural forces.

If a marble rolling down an incline could destroy the universe, the unnaturalness of the failures could not be hidden. Any incline you approached with marble-y intent would crumble to dust. Or instead of rolling down an incline, the marble would hover in midair.

If the LHC is a machine based on known physics and mechanics, then it would require causality-defying forces to stop it from working, just as it would take supernatural forces to stop anyone from simply rolling a marble down a slope.

And if this is the case, why should the LHC supernatural stop-gaps appear as they do -- as comprehensible engineering flaws? Why not something more unambiguously causality-defying like the LHC floating into the air and then disappearing into the void in an exciting flash of lights? Or why not something more efficient? Why should the machine even be built up to this point, only to be blocked by thousands of suspiciously impish last-minute flaws, when a million reasonable legislative, cooperative, or cognitive events could have snuffed the machine from ever even being considered in the first place?

More importantly, if the reasoning here is that some epic force puts the automatic smack-down on any kind of universe destroying event, then, obviously, repeated probability-defying failures of the LHC more logically reduces the probability that it will destroy the universe (by lending increasing support to the existence of this benign universe-preserving force). It doesn't increase the probability of destruction, by its own internal logic.

The argument for stopping the LHC then could only be economic, not self-preservational. So nuclear terrorism would actually be the worst time to start it up, since the energy and manpower resources would be needed more for pressing survival goals than to operate a worthless machine the universe won't allow us to use.

Related: The wacky "science" of "Unusual Events" and "Mysterious Circumstances":

> If an accelerator potentially existed that could generate a large number of Higgs particles and if the parameters were so that such an accelerator would indeed give a large positive contribution, then such a machine should practically never be realized!
> We consider this to be an interesting example and weak experimental evidence for our model because the great Higgs-particle-producing accelerator SSC [17], in spite of the tunnel being a quarter built, was canceled by Congress! Such a cancellation after a huge investment is already in itself an unusual event that should not happen too often. We might take this event as experimental evidence for our model in which an accelerator with the luminosity and beam energy of the SSC will not be built (because in our model, SI will become too large, i.e., less negative, if such an accelerator was built) [17].
> Since the LHC has a performance approaching the SSC, it suggests that also the LHC may be in danger of being closed under mysterious circumstances.

http://arxiv.org/abs/0802.2991
http://arxiv.org/pdf/0802.2991v2

I'm pretty sure that given the time to learn the calibration, you could make a million largely independent true predictions with a single error, and that having done so the unlikely statements would be less silly than "the LHC will destroy the world". Of course, "independent" is a weasel word. Almost any true observations won't be independent in some sense.

"If the failure probability had a known 50% probability of occurring from natural causes, like a quantum coin or some such... then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around."

I would feel such an urge but would override it, just as I override the urge to see supernatural forces or the dark lords of the matrix in varied coincidences with similarly low individual probabilities. (I once rolled a die six 8 times out of nine, probability about one in 2 million. I once lost with a full house in poker, probability a bit under a hundred thousand to one.)


The huge problem with any probabilistic estimates is the assumption that repeated failures of the LHC are independent (like coin tosses) and infrequent. Why, in an immensely complex bit of novel engineering, with a vast number of components interacting, would you assume that? How many varieties of filament failed when Edison was trying to build a light bulb? Thousands: but that did not prove that the carbon filament lightbulb was a danger to humanity, only that it was a very difficult problem to solve. It's taken more than twenty years to build the LHC: if it takes several years to get it working properly, would that be surprising?

@michael vassar

For the probability of a die coming up "6" eight times out of nine, I get about 1 in 200 thousand, not 1 in 2 million. If the die coming up anything (e.g., "1" or "3") eight times out of nine would have been similarly notable, I get 1 in 37 thousand.

Why do you override the urge to see dark lords of the matrix in these sorts of coincidences? Calculations of how many such coincidences one would expect, given confirmation bias etc.? Belief that coincidence-detectable dark lords of the matrix are sufficiently unlikely that such calculations aren't worth making? A desire not to look or be crazy?
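The two dice figures above can be checked directly with the binomial formula:

```python
from math import comb

# Probability of exactly eight 6s in nine rolls of a fair die:
# binomial(9, 8) * (1/6)^8 * (5/6)^1
p_eight_sixes = comb(9, 8) * (1 / 6) ** 8 * (5 / 6)
print(round(1 / p_eight_sixes))   # about 224,000 -- the "1 in 200 thousand" figure

# If any face repeating eight times out of nine would have been equally
# notable, multiply by six:
p_any_face = 6 * p_eight_sixes
print(round(1 / p_any_face))      # about 37,000 -- the "1 in 37 thousand" figure
```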

1e-6? You claim to be rationalists who project an expected 6 million deaths or a one in a million chance of losing all human potential forever. I assume this is enough motivation to gather some evidence, or at least read the official LHC safety report.

What overwhelming evidence or glaring flaws in their arguments leaves you convinced at 1e-6 that the LHC is an existential risk?

William makes a good point!
-----
One reason I dislike many precautionary arguments is that they seem to undervalue what we learn by doing things. Very often in science, when we have chased down a new phenomenon, we detect it by relatively small effects before the effects get big enough to be dangerous. For potentially dangerous phenomena, what we learn by exploring around the edges of the pit can easily be more valuable than the risk we faced of inadvertently landing in the pit in some early step before we knew it was there. Among other things, what we learn from poking around the edges of the pit may protect us from stuff there that we didn't know about that was dangerous even if we didn't poke around the pit.
-----

There would definitely be benefit to be had in working out what things cause universe destruction. For example, suppose someone created a form of engine that relied on high-energy particles. If such a device had a component whose failure allowed universe-destroying consequences to occur, the outcome would be quite significant. We would find that in practice the component _never failed_, _ever_. Yet, in the course of just years, the Everett branches in which we lived would become sparse indeed!

How many LHC catastrophes would be worth enduring for that sort of knowledge? I'll leave it to someone far more experienced than I to make that judgement.

