
March 27, 2009


Luboš Motl has often written about this, in the context of chiding or attacking Sean for having an interest in "Boltzmann brains", e.g. here and here. His attitude seems to be that the past having a lower entropy than the future is the reason why the second law works, that the initial conditions will probably be explained by string theory eventually, but meanwhile we shouldn't be worried about saying that they were indeed low-entropy.

I'm not grasping something here. The probability of us observing our relatively low-entropy world today, given a high-entropy past, is very low indeed. The probability of these observations given a low-entropy past is vastly higher. The Second Law says nothing about what our priors for these different possibilities should be, but if your prior is Solomonoff-like, then a low-entropy past is a reasonable possibility. If you're making an argument that it should be reasonable to "predict the past" by running the Second Law backwards, I'm not seeing it; empirically we know that doesn't work. Perhaps you could set out in more detail the argument for why it should, if you're making it?
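This Bayesian comparison can be sketched numerically; the prior and likelihood values below are made-up placeholders chosen only to show the structure of the argument, not physical estimates:

```python
# Toy Bayes-rule comparison of two hypotheses about the past, given that
# we observe an orderly, relatively low-entropy world today.
# All numbers are illustrative placeholders, not physical estimates.

prior_low = 1e-6     # even a tiny prior on a low-entropy past...
prior_high = 1 - prior_low

like_low = 0.9       # P(orderly observations | low-entropy past): high
like_high = 1e-30    # P(orderly observations | high-entropy past): astronomically small

evidence = prior_low * like_low + prior_high * like_high
posterior_low = prior_low * like_low / evidence

print(posterior_low)  # 1.0 to float precision: the likelihood ratio swamps the prior
```

The point of the sketch is just that the likelihood ratio can overwhelm even a heavily skewed prior, which is why the priors matter less than the conditional probabilities here.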

Are you arguing that since entropy in a closed system is conserved by Liouville's theorem, and so cannot rise, it is scandalous that the standard rationalisation of statistical physics does not sufficiently emphasise this apparent contradiction with the entropy-maximisation principle?

Mitchell, I can't imagine why one would have much confidence that string theory will explain it.

Paul, I'm saying that assuming a "low entropy past" is pretty much just assuming that your best model gets the past very wrong; it isn't a concrete alternative worthy of consideration. It isn't a solution; it is just a clever rewording of the fact that we don't have a solution.

prase, not even remotely close.

I'm a new reader, so I'm not sure if you're being hyperbolic for comic effect or if you're being serious. I wonder how you'd rank prior beliefs as to which field will be "overturned" first: thermodynamics or cosmology. The use of scare quotes around statistical mechanics stands out. Thermodynamics can tell you what will happen to a system not in equilibrium. Once the system has equilibrated, its unobserved history is lost. If you find a glass of room-temperature water on the counter, you can't tell if it started out as ice or as hot water. If you find the water slightly cooler than room temperature, you can rule out hot but cannot assume ice. Forgive me if this is too elementary, but based on your post, I wonder. The Past Hypothesis seems to be nothing more than "unknown or unknowable initial conditions."

No worries. Second law is a mind bender fer sure.

Listen, have you guys read this paper that purports to connect the Principle of Least Action with the Second Law?


Separately, I will say that I believe that the second law might have something to do with how dissipative transfers change with different local geometries because of gravity. Watch water boiling in low gravity to understand why.


Is it that "strange" if physics fails to map states one-to-one over time? Certainly this seems to be true in the absence of gravity, but this thinking hasn't been made consistent with gravity. I personally find it very unattractive. Does it not imply that all the information about us was present at the beginning of the universe? To me that seems contrived.

I prefer to think, without any formal theory to back it up (yet), that randomness is introduced, perhaps as wavelengths endlessly expand beyond the local horizon during eternal inflation. I would think this solves the problem, because it does not take spectacularly low entropy to initiate eternal inflation, and yet it quickly creates a universe dominated by low-entropy regions.

In fact you don't even need randomness, if horizons and Hawking radiation allow for quantum state copying. Consider performing a quantum experiment while falling into a black hole with it, or alternatively observing the Hawking radiation from the black hole. If the horizon in some way produces two versions of the same experiment, then de Sitter horizons can endlessly copy low-entropy states during eternal inflation, while high-entropy states do not benefit from this.

BTW, I thought that Carroll agreed any "problem" (other than initial conditions) is overcome if one simply assumes the early universe was not in thermal equilibrium -- which to me seems very plausible. Whether low-entropy initial conditions are OK or not is at this point a question of aesthetics. The only attempt I know of to explain the creation of a universe -- as half an instanton from "nothing" ala Vilenkin -- though admittedly extremely crude, speculative, and informal, does seem to provide low-entropy initial conditions.

Perhaps I misunderstand the tone of this blog. My last post did not intend to say that these questions have been answered, just that plausible outlines of parts of answers are out there, and physicists are working to understand this better. If Robin's point is to admit that while he thought all these questions had clear answers, they don't, then I must agree -- there are many questions that remain (and they are the deepest). My personal take is this pertains more to the creation of the universe and combining quantum theory and gravity, and less to what is traditionally called "thermodynamics."

"prase, not even remotely close"

So is the point that it is somehow unreasonable to suppose a non-maximum-entropy initial state, or is it something else? Can you be more technical about it? It is difficult for me to figure out what the main statement is (having read David's comment, I don't seem to be alone).

"What we have now are standard distributions (i.e., probability measures) over possible states of physical systems, distributions which do very well at predicting future system states. That is, if we condition these distributions on what we know about current system states, and then apply local physics dynamics to system states, we get excellent predictions about future states. We predict heat flows, temperatures, pressures, fluctuations, engines, refineries, etc., all with great precision."

By standard distributions do you mean (grand-/micro-)canonical ensembles? By system states do you mean microscopic or macroscopic states? Since at the micro level everything is time-reversal invariant, it seems that you mean the latter, but speaking about local dynamics suggests the former. I am not even sure whether these details would help me to understand.

David I'm not suggesting thermodynamics will be overturned. Our usual approach works well for predicting the future even though we have "unknown or unknowable final conditions." That it works terribly for the past says the problem is more than "unknown or unknowable initial conditions."

Mike, non one-to-one mapping is a vague hope, to toss on the large pile of other vague hopes that have not yet made much headway. "Not in thermal equilibrium" is another way to say "low entropy" which is another way to say "our usual approach fails terribly."

prase, "states" meant exact states, while "coarse states" described sets of such exact states.
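The exact-vs-coarse distinction can be made concrete with a toy spin system (a sketch with arbitrary parameters, just to illustrate the counting):

```python
from itertools import product

# Exact states: all 2^N configurations of N two-state "spins".
# Coarse states: sets of exact states sharing a macroscopic property,
# here the total number of up spins.
N = 10
coarse = {}
for micro in product([0, 1], repeat=N):
    coarse.setdefault(sum(micro), []).append(micro)

# Middling (high-entropy) coarse states contain vastly more exact states
# than extreme (low-entropy) ones:
print(len(coarse[5]))  # 252 exact states look "half up"
print(len(coarse[0]))  # 1 exact state is "all down"
```

This is the usual sense in which "high entropy" coarse states dominate: they simply contain far more exact states.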

Bayesians will probably love Lubos Motl's take on the Arrow of Time:


But the final conditions are not unknown. The final condition is thermal equilibrium. Thermodynamics is "failing terribly" at something it could never do. It would be easier to understand your critique if you took it out of the realm of origins of life or the universe. Preferably a non-relativistic, isolated system in which gravity or biology is not a big factor.

But, somehow, everyone appears to be ignoring the elephant in the room: might unlikely events have happened in the evolution of our universe because there was a will that wanted them to happen that way?


Penrose was trying to explain the past hypothesis - can't say his explanation is very convincing, but it is there (Weyl-flat beginnings and ends to the universe - I actually understand that, and can explain it if people want). His explanation for the missing entropy is to do with black holes.

But this isn't really credible without some non-time-reversible process tied to entropy in some way (just the weak force being non-time-reversible is probably not enough).

Retreat to the anthropic principle, alas?

You appear to be assuming time symmetry, which is known to be false. CP symmetry is broken, CPT symmetry holds under very broad assumptions, therefore T symmetry is broken. Therefore you should not expect similar results when time-reversing the laws.

Rolf: the universe is probably T-symmetric. The idea that CPT symmetry implies T asymmetry ignores the possibility of charge and parity being cyclic phenomena, which reverse themselves *automatically* when you reverse time. I explain this point here: http://finitenature.com/cpt/

Re: "no one is even remotely" - Robin loves a long shot!

Physicists and cosmologists widely agree that there was a big bang. Robin may think he knows better - but I can't see why he thinks that, looking at this post.

Re: VILLE R. I. KAILA AND ARTO ANNILA - how you can write a paper like that and *not* mention Roderick Dewar is beyond me. Do the authors really not know?

Coarse states are pretty clearly in the map, not in the territory.
This post seems to me to be an assertion that it's still not clear why the map isn't primarily in Boltzmann brains, rather than being where it empirically primarily is.

Robin, I think I just heard your strongest proof for the existence of God yet.

Michael, Lubos Motl is seriously confused.

David, there may well be types of particles which don't interact enough as their density falls to ever come into equilibrium.

denis, that is another vague hope that hasn't been taken very far.

Stuart, yes I recall Penrose's flat past hypothesis; is it still considered viable?

Rolf, CPT is enough to make this a huge puzzle/problem.

Tim, I did not deny a big bang.

Michael, the puzzle can be expressed without reference to coarse states.

Robin: Well, the Liouville's Theorem stuff gets us "starting from a low entropy state, we should expect to see increasing entropy", and we can then fully admit to saying "but we're still confused about how the fluff we had a low entropy state to start with!"

I'm unsure, but Barbour's timeless model _may_ help with this by basically forcing there to be a special unique low entropy chunk of configuration space in one "corner". That is, it seems a natural consequence of the model, so... And of course, in Barbour's model, you don't need to make any assumptions like "the low entropy bit was in the past", because you've got rid of time in the first place. So, keeping causality while losing time, while having a natural low entropy zone of the configuration space...

It occurs to me that nobody has actually done the experiment of T-reversing the whole universe; is it so obvious that it would lead to a low-entropy state? T-reversing requires, among other things, that you reverse the momentum of every particle, which requires you to know the momentum of every particle, which is not possible. Perhaps we should indeed predict a high-entropy state for the past, in the absence of other information. But we do have other information; is it really a good idea to apply a statistical law like 2ndT to a single case, the whole Universe? The sort of time-reversal that leads to a low-entropy state, in this view, is equivalent to adding a whole lot of otherwise unknown information to the system. Then it's not surprising that you get an effect you don't see in isolated systems.
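For what it's worth, the microscopic reversibility being appealed to here can be exhibited in a toy system (free particles in one dimension; a sketch only, since real systems add interactions and, as the comment notes, we could never actually measure every momentum):

```python
# Evolve free 1-D particles forward, flip every momentum, evolve the same
# number of steps again: the initial positions are recovered to floating-point
# precision. This momentum flip is the T-reversal operation.
positions = [0.0, 1.0, 2.5]
momenta = [0.3, -1.2, 0.7]   # arbitrary values, unit mass

def step(pos, mom, dt):
    return [x + p * dt for x, p in zip(pos, mom)], mom

pos, mom = positions, momenta
for _ in range(100):
    pos, mom = step(pos, mom, 0.01)
mom = [-p for p in mom]           # reverse every momentum
for _ in range(100):
    pos, mom = step(pos, mom, 0.01)

print(max(abs(x - x0) for x, x0 in zip(pos, positions)) < 1e-9)  # True
```

The sketch only shows that the dynamics retrace themselves under exact reversal; it says nothing about where the required momentum information would come from, which is exactly the comment's point.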

Robin, thanks -- sincere mea culpas are few and far between! Not that any sort of apology was really necessary.

The actual situation is pretty straightforward: standard stat mech explains everything we see very well so long as you assume a past hypothesis, whose origin is clearly a matter for cosmology, not for stat mech itself.

But two things are legitimately scandalous. First, most textbook/intro treatments of stat mech don't explain the need for a past hypothesis, which is somewhat inexcusable. Second, research-level cosmologists don't admit that explaining the low-entropy past should be very high on their list of duties.

But we're trying to change all that!

You did describe the "past hypothesis" as a "vague hope".

The "past hypothesis" is the hypothesis of a low-entropy early state. The big bang also hypothesises a low-entropy early state. Are you splitting hairs between describing the "past hypothesis" as a "vague hope" and "denying the big bang"? That seems ridiculous - but how else can one interpret these comments?

Paul Crowley, you wrote,

The probability of us observing our relatively low-entropy world today, given a high-entropy past, is very low indeed. The probability of these observations given a low-entropy past is vastly higher.

In this post, Sean Carroll wrote,

If all pasts consistent with our current macrostate are equally likely, there are many more in which the past was a chaotic mess, in which a vast conspiracy gave rise to our false impression that the past was orderly.

So it seems that there is a direct contradiction here, at least for probability distributions uniform over microstates.

Robin, pardon me, but now you're postulating new types of particles? Isn't it a truism that in the limit where particle interaction goes to zero thermodynamics breaks down?

Re: But two things are legitimately scandalous. First, most textbook/intro treatments of stat mech don't explain the need for a past hypothesis, which is somewhat inexcusable.

That is usually regarded as a fact - not a hypothesis - because of the strength of the supporting evidence.

Re: Second, research-level cosmologists don't admit that explaining the low-entropy past should be very high on their list of duties.

What caused the low-entropy state at the start of the observed universe is sometimes regarded as being outside the realm of scientific enquiry. However, there have been a number of speculative attempts at the problem - usually claiming that it was not, in fact, the beginning - e.g. the "Big Collision" theory.

I think these are pretty well known, though not very high-priority. It is hard to prioritise highly ideas that are so speculative and difficult to test.

Oh God, not again. There is nothing new here, and most physicists in the field think Boltzmann Brains are fringe stuff, for very obvious and easy reasons:

1) No one knows how to really count these states and sum them properly. In particular, no one really knows how to work gravity and entropy together yet, and no one really knows what it means to make a statement about entropy in our particular universe.

2) No one knows exactly how contingent certain outcomes are upon others - the usual example is evolution.

3) No one really knows what counts as an 'observer', or how to rate or quantify their perceptions.

4) The priors are not clearly stated.

So there is nothing 'clear' or 'obvious' about the argument that BB's must vastly outnumber the regular kind. Quite the contrary.

This is all bog-standard stuff, btw. BB's may sound kinda-sorta cutting edge, but they aren't, really, until the problems above are addressed a little bit better than they have been.

Otherwise, it's like arguing that the universe has to be three-dimensional for what are essentially anthropic reasons, only to discover that there are good dynamical reasons for three large spatial dimensions to be preferred. No one really wants to be making those sorts of mistakes.

What caused the low-entropy state at the start of the observed universe is sometimes regarded as being outside the realm of scientific enquiry. However, there have been a number of speculative attempts at the problem - usually claiming that it was not, in fact, the beginning - e.g. the "Big Collision" theory.

I think these are pretty well known, though not very high-priority. It is hard to prioritise highly ideas that are so speculative and difficult to test.

Posted by: Tim Tyler

Right on all counts. Understand, no one really believes that the universe came to be because of a 'statistical fluctuation'. But they don't disbelieve it either. There's simply no way to tell at this point. It could well turn out that there are excellent reasons to disbelieve a Boltzmann Beginning. But Boltzmann Brains aren't one of them. At least not yet.

Sean, the "past hypothesis" seems to me more of a restatement of our failure than an actual solution. And I'd be more comfortable accepting it if I saw some formal calculations showing what it implied; the hand-waving makes me nervous.

Scent, I said nothing about Boltzmann brains.

?!? I'm not connecting up A to B here with this comment, Robin. I'm just reading what people are writing and clicking links. Like the post you linked to about BB's.

I'm confused. If we believe the second law of thermodynamics, that entropy is increasing, then of course it's true for any non-equilibrium situation that our thermodynamic probability distributions will not "predict" the past accurately. Isn't that exactly what we mean when we say dS/dt > 0, that time evolution is (pseudo-)surjective with respect to macrostates? I don't get why adding in an initial low-entropy boundary condition is cheating -- it doesn't solve the problem of time's arrow, but it lets us make thermodynamics work on our universe.

Is it that our assumption that we started from a low entropy state is corrupted by the fact that the empirical data which justifies this assumption might be the product of a Boltzmann brain-esque situation, something like what Cyan quotes Carroll saying @ 5:09?
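The "time-symmetric dynamics plus low-entropy boundary condition" picture being discussed here is often illustrated with the Ehrenfest urn model; here is a minimal sketch (the seed, ball count, and step count are arbitrary):

```python
import random

# Ehrenfest urn: N balls in two urns; each step, one ball (chosen uniformly
# at random) switches urns. The rule is statistically time-symmetric, yet
# from a low-entropy start (all balls in one urn) the macrostate relaxes
# toward 50/50. Only the low-entropy boundary condition, not the dynamics,
# singles out a direction of time.
random.seed(0)
N = 100
left = N                     # low-entropy initial macrostate: all balls left
history = [left]
for _ in range(1000):
    if random.random() < left / N:
        left -= 1            # the chosen ball was in the left urn
    else:
        left += 1            # the chosen ball was in the right urn
    history.append(left)

print(history[0], history[-1])  # starts at 100, ends near the 50/50 equilibrium
```

Run backwards from the equilibrated end, the same rule would be overwhelmingly likely to stay near 50/50, which is the sense in which the distributions "fail" at retrodicting the special initial state.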

(Disclaimer: I have not read the old postings by Hanson and Yudkowsky that this post refers to, so I just hope that this small comment is not too much out of context.) Is there really a problem here? A basic assumption of statistical mechanics is that the universe started out in a low entropy state. With that assumption it gives predictions that agree with observations. Without it, it does not. That the universe actually started out of thermal equilibrium remains to be explained. There is nothing remarkable about that; there exist a lot of as-yet-unsolved scientific questions, and this one just happens to be one of them. And I agree with Mike above that this is more an unsolved problem in cosmology, quantum gravity and the like rather than in statistical mechanics (or even less in classical thermodynamics). (Of course those divisions into different subjects are just conventional and not in nature itself, but practical in the discussion.) A rather popular (and, I think, clear) discussion related to this:

Is it that our assumption that we started from a low entropy state is corrupted by the fact that the empirical data which justifies this assumption might be the product of a Boltzmann brain-esque situation, something like what Cyan quotes Carroll saying @ 5:09?

The argument, roughly, is that if the universe arose because the initial low entropy state was the result of a random fluctuation, the overwhelming probability is that we would see something else. That we don't is taken to be an argument against the 'random fluctuation' hypothesis. The problem is that the conditional is by long odds - according to some fairly high-power people - not true, or at least, not justified. In fact, no one really knows what the most probable universe that arose from a 'quantum fluctuation' would really look like, because we don't have a good way of calculating these sorts of probabilities. Not at this point, at least.

Yes, yes, the existence of the universe has been demonstrated to be highly improbable. Now, I vote we worry about something else in the short time we have before we all vanish in a puff of logic.

@Robin Hanson

I agree that Motl is confused. But he's interesting!

What did you not like about his explanation? I have to admit that the Bayesian perspective is difficult for me. I came up as a frequentist studying quantum mechanics from Sakurai.

Why are people so resistant to the idea that gravity might have something to do with the arrow of time? Come on, these are the two biggest pieces of the puzzle that aren't fitted together.

I'm kidding. Sort of.

Eliezer was wrong

Needless to say Robin, I’m very pleased indeed. Let me just recap a few of my own comments from August 2008, for readers looking for a truly radical hypothesis:

(Open Thread, Aug 2008)

“There was insufficient discussion of Occam’s razor:
(1) At the beginning of time the universe was in the simplest possible state (minimal entropy density). Why?”

(Self-Indication Solves Time-Asymmetry, Aug 2008)

“…the time asymmetry can only be explained by....universal terminal values...built into the structure of the universe….”

“…Occam's razor only works because for every knowledge domain there are associated *aesthetic principles*…”

“….very closely associated with the creation of beauty”

Given my current perspective on thermodynamics (which has changed somewhat since I last wrote), the low-entropy initial condition looks like it might be something of a red herring for explaining time's arrow. Judea Pearl's conditions for identifying a direction of causality, in terms of the direction in which unexplained correlations do not appear, seem more fundamental than Liouville's Theorem or the Second Law. You can view the Second Law as a consequence of Pearl's direction of time plus Liouville's Theorem, in the sense that, if Pearl's rule against unexplained correlations appearing when moving in the direction regarded as "forward" were to be repealed, then even given Liouville's Theorem there'd be nothing wrong with expecting water to turn into ice cubes and steam. It would just be an unexplained correlation that appeared while moving forward in time. Similarly, you could have a game-of-Life universe in which there was no analogue of Liouville's Theorem and multiple initial states could map onto the same end state, the equivalent of water turning naturally into ice cubes and electricity; but if this world obeyed Pearl's laws governing the direction of causality, you would still have records of the past and the perception of time moving in a particular direction.
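The game-of-Life point - dynamics with no Liouville analogue, where multiple initial states map onto the same end state - can be shown with an even smaller toy rule (a made-up one-dimensional automaton, chosen only for brevity):

```python
# Each cell becomes the AND of itself and its right neighbour (cyclic).
# This map is many-to-one: distinct initial states can evolve to the same
# successor, so no Liouville-style conservation of state-space volume holds.
def step(state):
    n = len(state)
    return tuple(state[i] & state[(i + 1) % n] for i in range(n))

a = (1, 0, 1, 0)
b = (0, 0, 1, 0)
print(step(a), step(b))  # both map to (0, 0, 0, 0)
```

Information about which predecessor you came from is destroyed by such a rule, which is why invertibility (or its absence) is a separate question from the direction in which records accumulate.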

The proposed link between Occam’s razor and the low-entropy start conditions is probably meaningless.

High thermodynamic entropy start conditions - looking similar to the heat death - could probably be specified extremely compactly. Similarly, a PRNG can produce an awful lot of what looks like noise with an extremely small internal state.
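The PRNG point - that an awful lot of apparent noise can come from a tiny internal state - is easy to demonstrate (a sketch; the seed and output length are arbitrary):

```python
import random

# The PRNG's entire "initial condition" here is one small integer, yet the
# output stream looks maximally disordered. High apparent disorder can thus
# have a very short description, which is the distinction being drawn between
# informational and thermodynamic entropy.
random.seed(42)
noise = [random.getrandbits(1) for _ in range(10_000)]
print(sum(noise))  # close to 5000 ones, as a fair coin sequence would give
```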

" a direction of causality in terms of the direction in which there are not unexplained correlations appearing"

Eliezer, could you provide a link explaining this definition further? From where I stand, this definition is circular. "Unexplained" requires an account of causality to function properly, so you are smuggling in the term that you want to define.

Eliezer, locally unexplained correlations sound to me a lot like what you'd get from a distant low-entropy hypothesis; I look forward to hearing where your thoughts end up when you are ready to share them.

Eliezer, I've just been reading Gary Drescher's Good and Real. His interpretation of an Arrow of Time arising within a time-symmetric system matches what you've described of Pearl's (although Drescher builds the understanding using case studies of a simple 2-D colliding balls model, where it seems that Pearl takes a more statistical approach).

Drescher talks about "...how the apparent arrow of time depends on the lack of coordination within the initial state, and depends on the developing correlations within subsequent states..."

For example, we might think we know about the past via our memories and records, but this standard approach says our records are far more likely to result from random fluctuations than to actually be records of what we think they are.

Even on rereading this makes no sense to me.

All he's saying, James, is that according to the calculations of some people, it is more likely for the book you are reading to have condensed out of intergalactic space through completely random fluctuations than it is to be the result of some sort of evolutionary process starting far back in time, say, the coalescing of a protostellar cloud into our present-day Sun and planets.

To say that these calculations are . . . questioned would be an understatement. The current consensus is that there is no way to make these sorts of calculations given the state of the art.

@Cyan: there's no contradiction between what I said and what Sean said. We have many more high-entropy predecessor states than low-entropy because there are so many more high-entropy states.

As I say, if your prior is the Solomonoff prior, there's a huge weighting in favour of the low-entropy predecessor state.

We have many more high-entropy predecessor states than low-entropy because there are so many more high-entropy states.

This statement doesn't seem to make any sense, on the face of it. Perhaps a specific example would help. I get the impression that there's some confusion between what is commonly taught as thermodynamics, and to what system the laws of thermo are being applied to.

Couple of questions:

  • When you say thermodynamics is bad at projecting into the past, you're only talking about the origin of the universe, right? You're not saying we get the wrong results when talking about teakettles.
  • When you say "the universe long ago had very low entropy", do you mean that the distribution of possible past states, conditioned on the present, is a nearly-flat distribution?
It sounds to me like you're just saying that (it's harder to predict the past than to predict the future) = (entropy increases).

Or is the difficulty that we don't have priors on initial states? You might be saying that the set of starting states that we consider reasonable, is only a small subset of the set of possible starting states. There can only be a set of starting states that we consider reasonable, if we are secretly assigning priors to starting states. We don't enter these priors on start states into the equations, so the output doesn't reflect them.

@Paul: Did you read my last comment? Informational entropy is one thing, and thermodynamic entropy is another. It may be quite possible to specify a heat-death-like state highly concisely if you are using a Turing machine for the specification.

Phil, there's a lot of handwaving going on, as you suggest, and all sorts of appeals to a false analogy. Let me quote:

I'm pointing out the difficulty of what counts as 'improbable'. For example, if you look at just the egg, and a sample of dust and gas containing all the constituent atoms needed to make the egg, say, a sample containing 10^30 atoms in a volume of 10^30 cm^3, it's very improbable that anything even remotely resembling an egg in any state will form, and then still much, much more improbable that a perfect, whole egg will form. A Boltzmann's Egg vs. a Universal Egg if you will.

But suppose that instead of 10^30 atoms, you have 10^60 atoms (in 10^60 cm^3) to play with. Naively, the formation of a Boltzmann egg is still vanishingly improbable, the Universal egg much more so. However, what really happens is that under the effects of gravitation this nebula of gas and dust condenses to form a sun with planets, life evolves, and so - practically in the blink of an eye as these time scales are reckoned - a perfect egg is formed. Under this scenario, the probability of forming a perfect unbroken egg is far, far higher than the probability of an imperfect, broken egg assembled by pure chance.

So how much more improbable is the formation of a Universal egg as opposed to a Boltzmann's egg?

That's the problem here. It's not at all clear how to count states to perform the notional summing, no one really has a handle on what entropy means in a universe with certain types of gravity and certain geometries, or even for that matter on all the requisite basic physical laws. Without this knowledge, no one knows how contingent certain events are in these ensembles versus other events. Are they as strongly contingent as the formation of water molecules in a box filled with oxygen and hydrogen? As strongly contingent as the evolution example given above? And so on and so forth. Indeed, most physicists say that we're nowhere near close enough to having enough of a handle on these problems to make any sorts of claims about things like the relative probabilities of Boltzmann's Brains forming. (I suppose that if you want to talk about a very restricted universe, one that is open, completely flat, has no gravity, and can be assumed to have a 'statistically even' distribution of matter, one might be able to make some sort of halfway plausible statement. That's not the universe we live in, however.)

This is all well known inside the physics community, and for these reasons most researchers discount the Boltzmann Brain argument. Those who don't are considered outside of the mainstream. Note that no one is talking about whether or not the universe arose from a 'statistical fluctuation' per se. Most people in fact would be unsurprised to find out that our universe didn't start out the way it did because of one of these problematic events. It's just that you can't use this kind of argument (at least, not yet) to discount the hypothesis.

I hope that clears things up.

