## How was this number chosen?

February 12, 2010 by dberenstein

Today’s puzzle is really simple. It is a single number; don’t worry about the formatting: it is not essential.

Your job, if you decide to take it, is to figure out how this number was chosen.

3983166922118810678205990336564718434224120605664143701183608
7681419052507877828777197836786790614849623650815370930359235
0129704516432578314074470098769495753238915840085811644293357
6455898753759925348456032608650278150327110180834816760596303
09728652360528042842943610644525298135991037793081877485511245
2474332249873412259136132368731917415053983296042291396691439150
43327960030646869030670768248956738775708735485960504256334453
736488021205358726905569574806264667486581193984051009097136483
506525271749484080035733863870881763567000457650973784539927240
2043442156827044195038802613229625783142492845706197046207004041
55134209534719408986385965928333597344225594042950352896


on February 12, 2010 at 9:09 pm, Steve: The number of people Lubos has insulted.

on February 12, 2010 at 10:50 pm, Luboš Motl: It’s 360 factorial without the zeroes at the end. If the solution were what the idiot Steve above me wrote, it would have to end with 897 rather than 896.

on February 13, 2010 at 3:04 am, Steve: Good one, Lubos.

It also is the factor by which you are more deluded than the average person, and also the number of people you think are out to get you.

on February 13, 2010 at 3:16 pm, dberenstein: Ok Steve,

That’s enough. The first time can be brushed off, but now it seems you should cool off a bit. The internet has this way of preventing people from knowing whether you are joking or being serious.

on February 13, 2010 at 3:19 am, Robbie Clarken: Lubos – May I ask how you got it? I wouldn’t know where to start (apart from the encyclopedia of integer sequences, which returned no results).

on February 13, 2010 at 4:28 pm, Luboš Motl: Dear David, are you interested in the algorithm I used? How would you approach the same problem if you were not the mastermind?😉

on February 13, 2010 at 5:19 pm, dberenstein: Hi Lubos:

Yes I am interested.

Usually, if I get a weird number, I first use prime factorization to see if anything comes up, and try to see a pattern in that (plot how many times each p appears).

This will tell you quickly if you have a power of a number.
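As a minimal sketch of that first check, in Python (the language the puzzle was generated in, per the discussion below; the helper function is my own): factor the number and look at the exponent pattern. A perfect power gives itself away because all the exponents share a common divisor.

```python
def prime_factorization(n):
    """Return {prime: exponent} for n, by simple trial division."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

# A perfect power shows a uniform exponent pattern right away:
print(prime_factorization(6**10))   # {2: 10, 3: 10}, i.e. 6^10
print(prime_factorization(360))     # {2: 3, 3: 2, 5: 1}
```

Trial division is hopeless for numbers with large prime factors, but it is enough to expose the kind of structure being described here.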

on February 13, 2010 at 6:15 pm, Luboš Motl: Dear Robbie, David would be one step ahead of me – but it’s not too surprising that he uses the same priorities when he’s solving tasks as when he’s designing them.😉

Without knowing about his “creative strategy”, I had to do a couple of useless steps. First, I counted the number of digits 0-9, checking whether there’s some bias. But all of them took up 10% of the digits, roughly within the expected errors, so there was no signal in the individual digits. It was a genuine real number or integer – and it was special because of its actual value, not because of the individual digits.
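That digit-count check is easy to reproduce. A sketch in Python (my reconstruction, not the code actually used), run on the answer already given above, 360! with its trailing zeros stripped:

```python
import math
from collections import Counter

# Count each digit's frequency; an unbiased number should put every
# digit at roughly 10%, within statistical noise.
digits = str(math.factorial(360)).rstrip("0")
counts = Counter(digits)
for d in "0123456789":
    print(d, counts[d], round(counts[d] / len(digits), 3))
```

Every digit indeed lands near 10%, which is why this first test was inconclusive.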

I was thinking about the counting of genus-g curves in some complicated Calabi-Yau spaces, but believing that David is not so evil (and moreover, he wouldn’t be able to calculate big numbers like that, at least not without someone like Marcos Marino), I had to find a better way.

“96” at the end looked pretty “composite”.😉 So I ultimately decided to factorize the number, getting primes from 2 to 359 in the mix, with non-decreasing exponents. Believe me, this is what factorials look like! For 20 seconds, I was also shocked that “5” was completely missing in the factorization, before I realized that if there were both 2 and 5, there would have to be zeros at the end, haha.

Because the factorization ended with 359, I guessed correctly that the answer should be a round number, and 360! was my first hypothesis to check. And it instantly worked.😉 It sometimes happens that the right answer is “attractive” in this sense, so you’re more likely to guess it before others, as soon as you converge close enough to the truth.

I am convinced that in this respect, Nature works in a similar way as David. The truth is pretty for those who develop some sense for the beauty of the truth.😉

on February 14, 2010 at 9:02 am, Luboš Motl: “Non-decreasing exponents” should have been “non-increasing” exponents, of course. Higher primes appear with smaller exponents.😉

on February 14, 2010 at 1:22 am, Uncle Al: How can Luboš’ spectacular proficiency and profound Weltanschauung support string theory? String theory is mathematically rigorous but provably unphysical. If the vacuum is demonstrably anisotropic for mass (not photons), if the Equivalence Principle is falsified, all bets are off. Intensely opposite geometric parity atomic mass distributions are single crystals in enantiomorphic space groups P3(1)21 versus P3(2)21 and P3(1) versus P3(2) – opposite shoes.

Do opposite shoes vacuum free fall identically? (quartz or gamma-glycine)

Do opposite shoes have identical enthalpies of fusion to socks? (benzil)

Do Meissner-levitated opposite shoes spontaneously spin in opposite directions? (quartz)

Do molecular left-left, left-right, right-right paired propellers on a rigid shaft show spin population divergence versus time of day?

Luboš, David… theory cannot transcend itself – certainly not perturbative treatments. The late 1800’s whale oil crisis was solved with petroleum. Somebody must look.

on February 14, 2010 at 7:36 pm, Luboš Motl: Dear Uncle Al,

both theoretically and empirically, the possibility that the equivalence principle is violated – by shoes from different feet or by any pair of two different objects – is highly constrained and it seems de facto impossible.

It’s great that you’re impressed by the possibility that pairs of objects that are mirror images to each other may behave differently. I am impressed by this possibility, too. And in fact, we know that this possibility is realized in Nature.

The weak interactions proceed in left-right asymmetric ways. For example, neutrinos are always left-handed while the anti-neutrinos are always right-handed (which violates both C and P). That influences the geometry of beta-decay strongly. In fact, left-handed neutrinos even behave differently than right-handed anti-neutrinos in the mirror (even the CP i.e. T symmetry is violated, although by an even smaller amount).

But this violation is small and always has to be linked to some chiral physics – e.g. two-component spinors describing neutrinos (or Chern-Simons forms, if you’re in other dimensions). Gravity is a force that can’t violate the left-right asymmetry.

Well, one could try to design theories that would do such a thing in 3+1 dimensions. They would have to treat the right-handed gravitons differently from the left-handed ones. This looks like an a priori plausible “discrimination” to do with the 2 physical polarizations.

However, it’s very hard to reconcile this thing with the equivalence principle that’s been tested: the equivalence principle leads one to derive the physical polarizations of the graviton from the whole metric tensor which is inherently left-right symmetric.

Yes, I did assume the equivalence principle. But it seems to hold with a huge accuracy, so GR can’t be quite wrong. It can’t be wrong by too much, if you wish. And if you try to add some deformations to GR that would discriminate the left-handed and right-handed gravitons and/or other things, I think that you won’t find any solution – at least none that I can imagine.

Of course, it would be extremely exciting if you found such an asymmetry in freely falling left and right shoes. But you know, people haven’t found any difference in acceleration – up to accuracy 10^{-16} – between *any* pair of objects that were usually much more different than the left and right shoe. So if people know that even two “very different things” accelerate by the same acceleration, why would you think that two almost identical objects – left and right shoe – will have detectably different accelerations? It makes no sense.

The hypothetical (but so far unobserved) different accelerations are usually assumed to come from different accelerations of protons and neutrons, and materials may differ in the proportion of the neutrons and protons. But if you have a left and right shoe, they’re identical in these respects. The hope that you could experimentally see a violation of the equivalence principle for pairs of so incredibly similar objects such as a pair of shoes is pretty much zero.

Your chances are not much higher in the theory, either. The equivalence principle holds exactly in all the vacua of string theory – one can get the physical graviton modes by removing the unphysical modes by some gauge symmetries that can always be interpreted as diffeomorphisms in spacetime. This principle holds for all vacua, including those proverbial 10^{500} vacua. You know, some people say that there are many solutions. There may be many but many basic principles are exactly valid in all of them. The equivalence principle always holds.

You may leave string theory but you would still have to find something interesting about the new conjectured effect – which seems to be absent in Nature. It would have to explain something, or unify something, or something like that. Otherwise, by Occam’s razor, unnecessary new things shouldn’t be added unless it’s necessary. This is not observed, so why would you believe that the (necessarily tiny) effect exists?

I can’t “quite rigorously” prove that it’s impossible and that there won’t be any cool papers about it in the future. But one can’t prove such a thing in science – ever. If you want to promote a particular hypothesis, you should have positive reasons why it’s interesting, and I don’t see them.

Cheers

LM

on February 14, 2010 at 9:20 am, Luboš Motl: David,

your equally famous almost-colleague Joe argues that the logical arrow of time – the fact that the past can be remembered but not the future, i.e. that the future is always at least as uncertain as the past (because it will evolve from the past or present) – proves, implies, or requires the anthropic principle (i.e. the existence of humans).

Are you buying these things?

I think that people have been losing their minds for a couple of years. Of course, the anthropic principle began to eat people’s minds a decade ago but I still can’t believe that even famous people are unwilling to admit that, to say it extremely modestly, the anthropic principle can’t be derived from any empirically established insights of physics. It’s plausible that the anthropic reasoning is a part of the truth but I find it spectacularly obvious that even if this were the case, no evidence can exist today because it is almost a transcendental assertion. Science has worked without the anthropic principle for centuries and for a good reason: it doesn’t need it (at least not now).

Cheers

LM

on February 15, 2010 at 12:34 am, dberenstein: Hi Lubos:

I’m not buying these things. I don’t like anthropic reasoning; I think it is defeatist. But then again, I don’t have a better alternative (yet) for solving the cosmological constant problem than having a landscape. Within that setup, we only require one possible universe with the right characteristics and some mechanism for getting there.

Cheers.

D.

on February 15, 2010 at 8:12 am, Luboš Motl: Thanks for your answer. Of course, I haven’t convincingly calculated the C.C., either.😉

on February 16, 2010 at 8:27 am, Joe Polchinski: Hi Lubos,

You have completely misquoted me. I have said that _your_ statement motivating the logical arrow of time, “No world with “observers” can exist without it”, is manifestly anthropic reasoning.

If the universe began in a typical state, so that subsequent evolution was just fluctuations around equilibrium, then there would be no arrow of time. So you can’t derive it from pure logic.

Best,

Joe

on February 16, 2010 at 8:40 am, Joe Polchinski: Just to amplify, if we are trying to answer the question, `why does our universe have property X?’ (where X could be either a small c.c. or an arrow of time) the answer `because worlds with observers can’t exist without it’ is anthropic.

on February 14, 2010 at 11:51 am, Giotis: I don’t understand why the multiverse reasoning is connected to the anthropic principle, and why the multiverse proponents accept and adopt this term. As I see it, the reasoning based on the string landscape argues that everything that could happen happened; thus, since the vacuum of our universe is an allowed vacuum of the string landscape, it just happened. The fact that human life exists in this universe is irrelevant and not essential.

Where is the anthropic principle in this argument? Nowhere.

On the contrary the anthropic principle states that the universe is parameterized in a specific way in order to allow humans or observers to exist. That’s completely different.

on February 14, 2010 at 7:16 pm, Luboš Motl: Dear Giotis,

if David has time, he will surely give you a better answer, but here’s mine.

What you say is a priori right: there is no link between the multiverse and the anthropic principle a priori. However, when you look at the physical consequences of a multiverse, those that can’t be derived purely from our single Universe that began 13.7 billion years ago, you will find out that without looking at the observers, there aren’t any physical consequences.

Our Universe began in some state 13.7 billion years ago, it was very small, and its basic features were determined by its being small. If you want to know more – e.g. the initial choice of the point on the landscape (shape of the Calabi-Yau space etc.), or the height of the inflaton, or anything like that – you need to know something about the parent Universe in the multiverse, too.

To do so, you need to know who our parent Universe could be, and where it could have given birth to ours, and so on. Such hypotheses are probabilistic in character – it’s hard to hope for more specific ones. In other words, you need some measure that will tell you where (in the multiverse, as well as the landscape – the set of possible vacua) Universes that could become ours are likely to be born etc. It’s of course difficult to design any such measure in any objective way, and it’s likely that this problem has no sensible rational solution.

But those who think that this is a reasonable problem to tell us something about our Universe always assume that a Universe that could become ours is much more likely to be born at places which lead to big bubbles with lots of observers – they inflate the probability measure by the factor of the number of observers, and similar stuff. We’re the typical observers, they say, so we’re likely to be in the vacua which naturally predict many observers. So in reality, all the probability measures that exist on the multiverse – which are the only reasons why someone would consider the multiverse (and the pre-Big-Bang pre-history of our Universe) to be physical – are always “tainted” by the considerations such as the number of stars, animals, people, intelligent people, and so on.

So the relationship is that whoever rejects the anthropic principle usually thinks that the pre-history of our Universe, before the Big Bang, can probably have no calculable yet verifiable consequences for physics. For example, I do believe it can’t. So while it’s conceivable (but not guaranteed) that it makes some sense to talk about the tunneling needed in eternal inflation etc., I think that such a mechanism can’t lead to any predictions because you can’t know where our bubble was born, anyway: this uncertainty is just the prior probability reparameterized in a different way. On the contrary, the people who believe that the multiverse is important in physics typically end up with an anthropic bias in their measures, and those who want the anthropic bias in their measure usually choose the multiverse as its most specific realization.

Best

Lumo

on February 15, 2010 at 12:45 am, Just Learning: The arguments surrounding the multiverse always revolve around the same basic principles. Are the universes dependent, independent, or exclusive (i.e., the basic options available in probability theory)? If they are dependent, then we really haven’t established anything more than the fact that the universe is bigger than we think; if they are exclusive, then we have established that our observable universe is the only one that matters; if they are independent, then any relationship that can be established is purely random and inherently meaningless.

Everett’s interpretation would seem to favor the exclusive approach.

on February 15, 2010 at 4:34 pm, Just Learning: Just to further clarify… Curvature must ultimately be related to interaction, which is related to events and information.

e.g. no interaction = no events = no information

Everett does a pretty good introduction…

“In fact, to any arbitrary choice of state for one subsystem there will correspond a relative state for the other subsystem, which will generally be dependent upon the choice of state the first subsystem, so that the state of one subsystem is not independent, but correlated to the state of the remaining subsystem. Such correlations between systems arise from interaction of the systems, and from our point of view all measurement and observation processes are to be regarded simply as interactions between observer and object-system which produce strong correlations.”

One can quickly conclude that a view of a multiverse where the individual “universes” interact in any meaningful way produces a clear inconsistency with MWI. Interacting universes must be incorporated as subsets of Everett’s universal wave function.

on February 14, 2010 at 6:12 pm, CoffeeCupContrails: D,

Could you explain the solution to this problem?

I’m an engineer, not a mathematician; still a student, not yet a professional. But I am very interested in understanding the theories that these solutions are based on.

Or if you could point me in the right direction.

Thanks.

on February 15, 2010 at 1:04 am, dberenstein: Hi CCC:

You have to know a bit about where the problem came from to understand that it was not conceived to be hard to solve. I’ve been playing with Python and made a function to calculate factorials. I was seeing how the arithmetic engine worked, since it admits arbitrarily long integers.

Then I calculated 360! and got a pretty long number. So I thought it would be cool to pose it as a problem, to see how long it would take people to solve it. But the trailing zeros made it a bit too obvious, so I deleted them (knowing full well that the problem would not be too much harder if I did that). I was still amazed that Lubos solved it so quickly.
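The construction just described takes only a couple of lines; a sketch, assuming Python’s built-in arbitrary-precision integers (the original function wasn’t posted):

```python
import math

# Build the puzzle number: compute 360! and delete the trailing zeros.
puzzle = str(math.factorial(360)).rstrip("0")
print(puzzle[:10] + "..." + puzzle[-3:])  # 3983166922...896
```

Stripping the zeros divides by a power of 10, which is what makes the factor of 5 disappear from the factorization discussed below.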

Now the issue after that is problem-solving strategy. If I had said that the above is a sequence of digits, that would be different from saying it is a number. If I say it is a number, then there was probably an algorithm used to make it.

Here, you can assume that since this is a big number, I made it by multiplying other smaller numbers. One of the ways to find out if a number is made by multiplying other numbers is to factor it into primes. Mathematica has a command that will let you figure that out.

If you do that, you will notice that there are a lot of small primes, but no big prime. Also, the biggest primes appear only once while the small primes appear a lot, except 5. At this stage one has to make a guess.

Because the larger primes in the set of prime factors appear only once, it makes sense to try a factorial: that is, a product of all the integers up to some number. Primes that are bigger than half that number can appear only once in such a product, so given the information in the factorization, one can start trying these numbers.

The largest prime appearing is 359, so it makes sense to start at 359!, then 360!, etc. If you look at these, you see that they are trailed by zeroes (lots of them), so you can just truncate them (truncating is dividing by a power of 10, which wipes out all the factors of 5).

But that is a guess, one tries it and the problem is solved.

But science is often like that. One needs to take a leap of faith on some of the information, and it can come out OK, or badly (in which case you basically need a new idea).

The only theory that this problem is based on is some basic number theory, and you have to know what a factorial is (factorials show up in a lot of combinatorial problems, so it is a well-known function). Basic number theory is just various properties of the integers: multiplication, unique factorization into primes, and some other functions associated with these properties. Problems of multiplication are solved by unique factorization into primes. Because I divided by a power of ten, the basic problem has not changed: the primes will be the same, except for the powers of 5 and 2.
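Putting the whole strategy together, a minimal sketch (the helper function is mine; trial division is feasible here precisely because every prime factor of the number is small):

```python
import math

def small_prime_factors(n):
    """Trial division up to 360; enough here, since no larger prime divides n."""
    factors = {}
    p = 2
    while n > 1 and p <= 360:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    return factors, n  # n > 1 would signal a large prime cofactor

# Stand-in for the posted number (which is 360! with trailing zeros removed):
puzzle = int(str(math.factorial(360)).rstrip("0"))
factors, cofactor = small_prime_factors(puzzle)

print(max(factors))   # 359: the largest prime in the mix
print(5 in factors)   # False: the 5s left together with the trailing zeros

# Guess factorials just past the largest prime, with their zeros stripped:
for m in (359, 360, 361):
    if int(str(math.factorial(m)).rstrip("0")) == puzzle:
        print("It is", m, "factorial without the trailing zeros")
```

The factorization alone narrows the search to a couple of candidates, and the first round number past 359 works.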

on February 14, 2010 at 11:49 pm, Uncle Al: Dear Luboš,

I agree with your exposition! Every measurable property physics can imagine has been tested, benchtop to PSR J1903+0327. Eotvos experiments that never worked and cannot work are vigorously pursued. Gravitation is geometry. A parity Eotvos experiment (quartz or gamma-glycine) employs an unmeasurable property as a geometric test of gravitation. If it has a net non-zero output, theory was wrong.

You say the parity Eotvos experiment in quartz should not be run because it should not work. Look what should work and doesn’t. I cannot imagine a stronger reason to run a parity EP test. Once. The worst it can do is succeed.

“you should have positive reasons why it’s interesting, and I don’t see them.”

http://www.mazepath.com/uncleal/parity.htm

We now have the geometric parity divergence of atom positions in a ball of quartz crystal lattice calculated in 0.001 A radius increments. Seven million points. The y = -2x + b fit, arising from one angle in the formula unit, appears to be exact.

on February 16, 2010 at 7:12 pm, Luboš Motl: Dear Joe,

“Just to amplify, if we are trying to answer the question, `why does our universe have property X?’ (where X could be either a small c.c. or an arrow of time) the answer `because worlds with observers can’t exist without it’ is anthropic.”

this statement of yours is right or wrong depending on whether the existence of observers is a sufficient condition for the argument to apply, or not. As you wrote it, the statement (definition of the word “anthropic”) is wrong if taken literally. Sometimes, I feel that this simple confusion – whether the observers are actually needed for an argument or not – is being propagated deliberately.

To see a simple example: The worlds with observers surely can’t exist unless the probabilities sum up to one – the worlds with observers can’t exist without unitarity, because unitarity is really a universal consistency condition – but that doesn’t mean that unitarity is an anthropic argument.

It would only be an anthropic argument if the existence of the observers were needed for unitarity to be derived as a necessary condition. It’s clearly not the case: unitarity has to hold in worlds without observers, too.

“The arrows of time” are completely analogous to “unitarity” in this argument. The existence of the arrows of time surely doesn’t depend on the existence of observers, so it can’t be anthropic. Of course, if there were no observers, no one could do physics and construct correct calculations concerned with the increase of entropy, or any patterns or causal relationships between events in the past and future, for that matter.

But that doesn’t mean that the laws of physics – and the laws for the right calculation of probabilities – wouldn’t hold. They would still hold but they would be useless for “everyone” because “no one” would exist.😉

I can’t believe that you’re still avoiding any particular test example for these arrow-of-time considerations. I asked you about that and you have never answered – because no answer compatible with your general “arrows are anthropic” thoughts can possibly exist.

If one calculates the probabilities for a scattering process, with uncertain spins in the initial and the final states, one must sum over the final polarizations, but average over the initial polarizations (possibly with nontrivial weights which are unknown). Agreed? This process is past-future asymmetric, agreed? This formula prefers the number of macroscopically indistinguishable microstates in the final state to exceed the same number in the initial state, agreed? It’s really the master example explaining why the entropy grows.
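The asymmetric recipe in that paragraph, written out explicitly (a standard textbook transcription of “sum over final, average over initial”, not a formula taken from the thread), with $N_i$ the number of initial polarizations:

```latex
P(i \to f) \;=\;
\underbrace{\frac{1}{N_i}\sum_{\text{initial spins}}}_{\text{average}}\;
\underbrace{\sum_{\text{final spins}}}_{\text{sum}}\;
\bigl|\langle f \,|\, S \,|\, i \rangle\bigr|^{2}
```

The $1/N_i$ on the initial side, with no matching factor on the final side, is exactly the past-future asymmetry being discussed.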

The calculation above doesn’t depend on the existence of observers, does it? I have calculated a scattering of two electrons. If you think it does depend on the observers, could you tell me how any calculation in physics could be done without that and how any feature about the observers has actually affected the formula (sum over final, average over initial)? Where do you think that the past-future asymmetry (the logical arrow of time affecting the formula) comes from if it wasn’t there to start with? How did the observers get into it?

Thanks in advance for your answer.

You may have been confused by my usage of the term “memory of the past” in my comments about the asymmetry. But this term “memory” was only used because I used a user-friendly version of the argument, optimized for worlds where “memory” exists. In our world, it surely does, so it’s hopefully not such a sin to assume that memory may exist. But even if there were no observers in a universe, the rules to correctly predict the future and/or retrodict the past from the knowledge of any data would be identical to those in our world. The rules don’t depend on the association of the “memory” with any observers. The rules just become useless if there’s no one to use them – but “useless” is something else than “invalid”.

Best wishes

Lubos

on February 17, 2010 at 2:41 am, Joe Polchinski: Hi Lubos,

Thank you for your message. Let me first restate it, to see if we are converging. We are dealing with sufficiency, “observers imply X”. Just to dismiss a red herring, neither of us is discussing necessity, “X implies observers.” Rather, your point is that if X is something that is always true, like 2+2=4 or unitarity, then “observers imply X” is true as a logical proposition but it does not mean that X is anthropic.

In this case, though, X is not always true: in a universe that begins in a typical state there is no arrow of time. (This is the `test case’ that you ask for, which I have mentioned in all three messages). So the question is, what argument is being used to exclude the not-X situations? Sean’s book (which was the point of departure for this whole thing) argues that a dynamical law, a theory of the initial conditions, is needed, and I agree. Your review dismisses this, and when we parse your argument we find at the center of it “observers imply logical arrow of time implies thermodynamic arrow of time.” The only argument you make that excludes the non-X case is the existence of observers.

Regarding your spin argument, for the non-X state above the probabilities of finding a particular pair of electrons in a given set of spin states is the same both before and after the collision, both maximizing the entropy of the subsystem. If I give you the wavefunctions of the system at t=100 sec and t=101 sec, you can’t tell me which is which.

Regarding your comments about memory, I think you are saying that any persistent structure, like a rock, would do as well, because it correlates with a past state of the universe in a different way than it correlates with the future state. So (I think we are agreeing) the meaning of observer can be generalized. But in the non-X case there are no such time-asymmetric persistent structures. So we can weaken `observer’ to `time-asymmetric persistent structure’ or something like that, just as for the c.c. we can weaken `observer’ in a similar way, but it’s just a variation of the anthropic argument.

By the way, this discussion has observational consequences, as Eva Silverstein reminds me. The next round of CMB observations will look for gravity waves, whose amplitude determines the Hubble scale during inflation. Right now this scale is uncertain by many orders of magnitude, and to understand it from first principles likely depends in part on the relative entropy of different initial states.

Best,

Joe

on February 17, 2010 at 7:57 am, Luboš Motl: Dear Joe,

thanks for your reply. As everyone knows, I have never disagreed that physics may want to look for and perhaps find a good theory of the initial conditions and it would be a valuable find. It may never find one, but it may find one, in which case it will become a thrilling major part of the physics knowledge.

What I disagree with is the proposition that the existence of mundane lab phenomena such as “breaking eggs but not unbreaking eggs” has anything to do with the issue of the initial conditions of the Universe or with the multiplicity of the universes inside the multiverse, with the anthropic principle, or even with the Boltzmann Brains, or with any of these “ambitious” ideas.

In particular, a low initial entropy of the Universe is neither a sufficient nor a necessary condition for the second law of thermodynamics to hold in the lab.

It’s completely obvious that the increase of entropy between 8:03 and 8:04 am here had nothing to do with the Big Bang or inflation. Even if the initial entropy of the Universe were low, the entropy could have been increasing overall yet decreasing between 8:03 and 8:04. I clearly need some local laws that are valid now and here (and not just some general statements about one event 13.7 billion years ago) to derive what has been (or what will be) happening between 8:03 and 8:04.

The increase of the entropy in my room between 8:03 and 8:04 was determined by the laws of physics well approximated by QFTs which are local, so they really *can’t* be affected by any details that happened 13.7 billion years ago or billions of light years away or a in a different Universe. The precise derivation of the increase of the entropy is given by the H-theorem and its variations – and they manifestly have nothing to do with the initial conditions, either.

The two paragraphs above showed that a low initial entropy wasn’t sufficient to derive the increase of entropy in my room. It wasn’t necessary, either. The Big Bang could have started with S/k=10^{80}, and the entropy in my (thermally insulated, after the renovation) room would still be increasing today, every minute. Yes, we know that the entropy in the past couldn’t have been higher than 10^{100} today, but one needs neither observers nor any special knowledge about the initial state to know this much. It follows from the 2nd law and the knowledge of the present state.

All the speculative concepts in this discussion, including the eternal inflation, anthropic principle, or even Boltzmann Brains remain strictly separated from the empirical data and they haven’t been needed (or useful) for the explanation of any single observable fact about the reality, with the possible marginal exception of the controversial Weinberg’s anthropic “derivation” of the cosmological constant. If you or Sean Carroll or anyone else tries to identify the observed phenomena – such as the behavior of eggs (much more mundane, low-energy, and low-brow than the cosmological constant) – with any of the speculative notions, you’re just not building upon the truth or rational reasoning. No link of the sort exists.

“In this case, though, X is not always true: in a universe that begins in a typical state there is no arrow of time.”

This is a meaningless statement. You must define a measure that determines a “typical state”. Knowing your anthropic/egalitarian roots, you would probably pick a “uniform measure”. But most likely, no “uniform” measure exists because the total Hilbert space of string theory is infinite-dimensional (note the maximum infinite entropy in AdS5 x S5, one superselection sector, to be sure about it). The uniform measure on an infinite/noncompact set couldn’t be normalized. So the Universe can’t begin in a “typical state”. It can only begin in a state that one describes, at least by *some* nontrivial piece of information.

Less ambitiously, you may have wanted to think about a universe that begins in a more particular but still “high-entropy” state. But it is very untypical for any physical system – including the Universe – to begin in a high-entropy state. In fact, it marginally contradicts the second law. It is a law that any physical system begins in a lower-entropy state than the one it ends in, and it can be derived from statistics applied to *any* local (in time) laws of physics. Your and Sean’s whole argument is based on the assumption that the second law of thermodynamics is completely wrong, and then you seem surprised that it leads to paradoxes. It leads to paradoxes only because you have included a tautologically false proposition among your assumptions (the proposition that the initial state is “generic” or “high entropy”).
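That statistical claim – entropy growth emerging from perfectly reversible local dynamics plus a low-entropy initial macrostate – can be illustrated with a minimal toy simulation. Everything below (free flight in a box, 20 coarse-graining cells, the particular particle count) is an illustrative choice, not anything specific to the thread’s argument:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100_000, 20              # particles and coarse-graining cells (arbitrary)

def positions(x0, v, t):
    """Free flight in [0, 1] with reflecting walls: reversible, deterministic."""
    y = (x0 + v * t) % 2.0      # unfold the billiard onto a circle of length 2
    return np.where(y > 1.0, 2.0 - y, y)

def coarse_entropy(x):
    """Coarse-grained entropy -sum f_i log f_i over M equal cells (in nats)."""
    counts, _ = np.histogram(x, bins=M, range=(0.0, 1.0))
    f = counts / counts.sum()
    f = f[f > 0]
    return -(f * np.log(f)).sum()

x0 = rng.uniform(0.0, 0.5, N)   # low-entropy macrostate: all in the left half
v = rng.normal(0.0, 1.0, N)

s_early = coarse_entropy(positions(x0, v, 0.0))    # close to log(M/2)
s_late = coarse_entropy(positions(x0, v, 50.0))    # close to log(M): equilibrated
print(s_early < s_late)
```

The microscopic dynamics is exactly time-reversible; the coarse-grained entropy grows anyway, because almost all microstates compatible with the low-entropy macrostate spread out.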

The beginning of any system – and it is really another general misconception to think that we need to talk about the whole Universe to settle any of these thermodynamic questions – is by definition a low-entropy state. That’s how the words “beginning” and “end” are distinguished. If it didn’t have a low enough entropy, close to zero, one could always ask “what was before that?”, and the answer would have been a lower-entropy state. By extrapolating this argument as far as one can, one inevitably gets to the real beginning, which inevitably has a low entropy.

Without a Hartle-Hawking-like dynamical law, I can’t tell you what the state exactly was, or even what variables should be used to describe it. But basic general statistical physics is surely enough to determine that it was a low-entropy state. At any rate, any theory – such as your “toy model” – assuming that the initial state was a high-entropy state is instantly falsified by the empirical evidence.

“Regarding your spin argument, for the non-X state above the probabilities of finding a particular pair of electrons in a given set of spin states is the same both before and after the collision, both maximizing the entropy of the subsystem. If I give you the wavefunctions of the system at t=100 sec and t=101 sec, you can’t tell me which is which.”

Note that you have completely ignored – really, denied the existence of – the correct formula, the one that sums over final states and averages over initial states. It is a basic formula in any introductory text on QFT, or even on quantum mechanics. You’re surely joking that the collision of two electrons has to “maximize the entropy” in both the initial and the final state, aren’t you?

If I scatter two electrons, I usually take them polarized, so the entropy is zero or very low in the initial state. A pure initial state evolves into a pure final state, so the final state has the same low entropy. I can trace over some degrees of freedom to obtain a slightly higher-entropy state, too. It’s still unlikely that the entropy would be maximized after a single scattering.
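For reference, the textbook recipe being pointed to – for an unpolarized cross section with two spin-1/2 particles, as in any standard QFT text – treats the two ends of the process differently: average over what is unknown initially, sum over everything accepted finally,

```latex
\overline{|\mathcal{M}|^2}
  \;=\; \underbrace{\frac{1}{2}\sum_{s_1}\frac{1}{2}\sum_{s_2}}_{\text{average over initial spins}}
        \underbrace{\sum_{s_3}\sum_{s_4}}_{\text{sum over final spins}}
        \bigl|\mathcal{M}(s_1, s_2 \to s_3, s_4)\bigr|^2
  \;=\; \frac{1}{4}\sum_{\text{spins}} |\mathcal{M}|^2 .
```

The 1/4 on the initial side and the plain sum on the final side are not interchangeable; that built-in asymmetry between past and future is exactly what is under discussion.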

The only scattering problem you can solve with your additional assumption that the entropy is maximized in both the initial and the final state is the “garbage in, garbage out” scattering problem, and the correct probability for this outcome of your “truncated physics” is 100%. (Garbage refers to your maximized-entropy state.) Real physics can never assume that the entropy of the whole system is maximized, especially not the entropy of the initial state: it’s just not true. Real physics (and all of science, and rational reasoning in general) takes some nontrivial information about the real world, does something with it, and spits out a result about another piece of information.

Knowing some information about the real world is equivalent to knowing that the system is in a subset of states, and it always reduces the entropy from the maximum to a lower value. Actually, the difference, S_{max} – S, can be understood as the amount of information that we have about the system, relative to the assumption that it is the “most typical system” of a given type. So whenever physics makes any sense, the entropy is not maximized.

Well, this is not a “100%” statement. It is a statistical statement, in the sense of statistical physics. There is still a probability of exp(-(S_{max} – S_{typical reasonable question})) that you choose the initial state to be exactly the single highest-entropy (mixed) state. Then it will evolve into another (usually the same, if time-translational invariance holds) highest-entropy state. Indeed, you will find that in this “garbage in, garbage out” world, you won’t find any rocks or observers.
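The identification of S_{max} − S with held information is easy to check on a toy discrete system (the 8-state example below is purely illustrative; entropies are in bits, so the suppression factor is 2^{−ΔS} rather than e^{−ΔS}):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# 8 equally likely microstates: maximum entropy, zero knowledge.
uniform = [1 / 8] * 8
s_max = entropy(uniform)                  # log2(8) = 3 bits

# Knowing the system is in one of the first two microstates
# cuts the candidates from 8 down to 2.
known = [1 / 2, 1 / 2, 0, 0, 0, 0, 0, 0]
s = entropy(known)                        # 1 bit

info = s_max - s                          # information we hold: 2 bits
print(s_max, s, info)
```

And 2^{−(S_max − S)} = 2^{−2} = 1/4 is precisely the chance that a maximally ignorant guess lands in the known 2-of-8 subset – the discrete analogue of the exp(−ΔS) probability above.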

But this conclusion about this “world” has no relevance for physics. This “messed up” world is not ours, as we know after we make even one observation of absolutely anything. And this world, as an initial state, has measure zero among all physics questions. Moreover, I have argued that it should still be possible to derive that the high-entropy “initial” state actually had to have a pre-history, and to derive the older, lower-entropy states from which it had evolved. There is a slightly different way to formulate the fundamental mistake that you are making all the time: you think that lower-entropy states have measure zero, even for initial states.

But they surely don’t. On the contrary, the maximum-entropy initial state has measure zero (using any sensible measure on the space of physics problems). It’s because there is exactly one, essentially 0-bit description of that state: “the maximum-entropy state”. On the other hand, there are many, many more – up to exp(S_{max}) – descriptions of meaningful initial states, e.g. “helium star of radius R, density profile f(r), etc.”. They clearly dominate the ensemble of physics problems parameterized by their initial states. Every sane person knows that the initial states in almost all sensible physics questions have a non-maximal entropy.

If you give me wave functions at t=100s and t=101s, and they are wave functions for our Universe, then first of all, I *will* be able to determine which is which. For example, the curvature is lower and the wavelengths of particles are longer at t=101s. If the two wave functions are about a universe that is not ours, I might be unable to tell you which is t=100s and which is t=101s. But that’s not a problem of mine or of physics. It’s just a manifestation of the fact that “your giving me wave functions at 100s, 101s” is not all of physics. Of course, if you selectively choose a one-second interval where nothing happened because the system was already in complete equilibrium, there won’t be a difference between the two states. But that’s an artifact of your toy model’s not being physical: in the real world, there is always a difference. And more importantly, there’s a difference between the rules for calculating the (mixed) state at t=101s from the (mixed) state at t=100s, which is a straightforward prediction, and the opposite calculation, which is a retrodiction that requires logical inference and a choice of convention-dependent priors.
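The prediction/retrodiction asymmetry in that last sentence can be exhibited with a toy Markov chain. The three coarse-grained states and the transition matrix below are invented for illustration; the point is only that the forward conditional is fixed by the dynamics while the backward one is not:

```python
import numpy as np

# Toy irreversible dynamics on three coarse-grained states
# (0 = "low entropy", 1 = "medium", 2 = "equilibrium").
# T[i, j] = P(state j at t+1 | state i at t).
T = np.array([
    [0.1, 0.8, 0.1],
    [0.0, 0.2, 0.8],
    [0.0, 0.0, 1.0],
])

def predict(now):
    """P(next | now): fixed by the dynamics alone -- no prior needed."""
    return T[now]

def retrodict(later, prior):
    """P(previous | later): requires a prior over previous states (Bayes)."""
    joint = prior * T[:, later]       # P(prev = i) * P(later | prev = i)
    return joint / joint.sum()

# Two different priors about the past give two different retrodictions:
flat = np.array([1 / 3, 1 / 3, 1 / 3])
low_entropy_past = np.array([0.8, 0.15, 0.05])

print(predict(1))                                  # unambiguous
print(retrodict(2, flat))
print(retrodict(2, low_entropy_past))              # prior-dependent
```

The forward rule is one matrix row; the backward rule changes whenever the prior does, which is exactly why retrodiction needs a convention-dependent input that prediction does not.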

Entropy differences between various stages of the cosmological evolution might be measured in the CMB and elsewhere. Right, it would be great; it will probably not happen with the gravitational waves any time soon. But if it does, many things about the history of our Universe could be measured.

What does it have to do with this nearly metaphysical discussion of ours about the arrow of time and/or the anthropic principle? Or do you really expect the entropy of the early Universe to be measured as being higher than today’s? Or will they find Boltzmann Brains or bubbles where time runs backwards?😉 I am apparently completely missing the point of this comment of yours. All macroscopic objects in the world – and, effectively, also the Universe – have some entropy that keeps on increasing with time. The very fact of the increase is universal; the precise amount is not, but the latter may often be observed and/or calculated with the knowledge of the detailed microscopic laws.

But once again, the statement that the initial state never has a higher entropy than the final state is the universal truth, and your proclamation that it is typical for objects or universes to have a high-entropy initial state is universally false.

Best wishes

Lubos

on February 17, 2010 at 11:17 amJust LearningWe can think of modern physics as taking two approaches to the problem of why we have the physics we have. The first is the evolutionary biology approach, the second is the classification of defects approach.

The evolutionary biology approach assumes that because every place in the universe shares the same laws of physics, those laws must have a common origin. This is analogous to the choice of chemicals in our DNA… e.g. there are several families of long polymer chains that could store genetic information the same way our DNA does, meaning that there is nothing special about the molecule we use to store our genetic information… so that choice was effectively random, which supports the common ancestry of life on Earth.

The anthropic principle is based upon the same sort of thinking for the laws of physics. There is nothing particularly unique about our laws, they were chosen at random in some process.

The classification of defects approach is really just raw mathematics. We assume that we exist in some sort of manifold with defects. Those defects are absolute certainties in manifolds with certain global properties (dimension, dynamics, interaction). The physical constants are simply ratios that are very natural given the particular global properties of the manifold… and the particular manifold we exist in is mathematically unique. Our real task is either to derive or to discover the true global properties of the manifold.

The naturalness of the physical constants in the latter approach is no different from the naturalness of pi… e.g. a straight line and a circle that encompasses that straight line have a certain ratio. We need at least a two-dimensional Euclidean differentiable manifold to even make that relationship conventionally meaningful. We never debate the origins of, or justification for, pi… we don’t see it as the outcome of an evolutionary process.

I think the latter is more natural. In particular, the former always creates a logical inconsistency by requiring a process that exists, in some sense, outside the universe.

on February 18, 2010 at 6:52 amJoe PolchinskiHi Lubos,

We are starting to go in circles and I am running out of time, so this will be my last reply, I am sure that you will have the last word. There are some interesting points in what you say, but also a repeated mixing of observation with explanation so that it is difficult to disentangle, and sometimes you attribute things to me that I have not said. We all clearly recognize the importance of local laws, and Sean has done a service in explaining Boltzmann’s work to the public, but the arrow of time, like many properties of our universe, is something that depends both on local laws and initial conditions. Boltzmann would have been very satisfied if he had been around to hear about the expanding universe.

I still don’t agree with the statement in your review, that it is an explanation for the arrow of time that observers cannot exist without it. (By the way, I think your review also says that Boltzmann used the existence of memory – your logical arrow of time – as an input. I regard it as derived from the existence of a low entropy initial state.) In your latest explanation you are shifting more to the nature of the initial state, so some of your arguments are more physical though I don’t agree with all of them, and many still invert observation and explanation. For example, arguing the existence of an arrow of time from the fact that we have words for “beginning” and “end” is logically true, but as a scientific explanation it sounds rather anthropic, doesn’t it? I don’t at all get the point of your repeated discussion of scattering — it assumes an arrow of time, it doesn’t explain it. Also, it’s not mathematically correct to replace \leq with < to derive the low entropy of the initial state, you need the latter to justify the former.

One might try to argue that any theory with gravity provides a suitable initial condition, but the HH wavefunction may actually be a counterexample. It is not so well-defined because of contour arguments, but within the space of positive-cc vacua it may give a typical state in my sense: its magnitude-squared is equal to the de Sitter entropy. (Sean alludes to this in footnote 275.) It’s true that the data that distinguish different initial states will be limited, but as you note, the detection of gravitational waves in the CMB would be a great thing.

Best,

Joe

on February 16, 2010 at 10:49 pmGiotisThe right question to ask I think is what is the explanatory power of the anthropic principle and what it explains exactly.

The weak anthropic principle i.e. we measure the quantity X at a specific value because otherwise we wouldn’t be here to measure it, does not explain the value of the quantity X but instead explains partly why we are here.

This statement has scientific value because it explains partly our existence but of course it does not explain the value of the quantity X. Moreover it can be used to constrain the input data if needed and to make predictions. For example Weinberg used the anthropic principle to predict a small cc years before its value was actually measured.

The strong anthropic principle, i.e. the quantity X *has to have* the measured value because human life *must* exist, explains the value of the quantity X, but in a totally unscientific manner, since it arbitrarily places life in a dominant position inside the cosmos. Like the weak anthropic principle, though, it can be used to constrain data and to make predictions.

My understanding (putting aside the probability measure that Lubos mentioned for a while) is that the weak version of the anthropic principle is used within the context of the multiverse but within that context it is a totally vague tautological statement which does not explain what the theory wants to explain, for example it does not explain the cosmological constant. The explanatory power in this case lies instead in the plethora of the vacua and the anthropic principle is just an unnecessary misused historical or philosophical remnant. It’s not a principle in this context; it’s just common sense that nobody can object to.

on February 18, 2010 at 10:16 amLuboš MotlDear Joe,

thanks for your new reply. Yes, we’re mostly running in circles. Apologies but you don’t seem to be listening and you don’t seem to think about real physics or real formulae in this case. I find your discussion somewhat religious in character.

You say that the arrow of time depends on initial conditions. You haven’t provided us with any evidence of this statement – it seems to be just a rationally unsubstantiated basic dogma of yours – and I think that the statement is manifestly wrong.

The arrow of time gives an “orientability” to the spacetime as a Lorentzian manifold. Each timelike vector located anywhere can be said to be either future-directed or past-directed. This extra structure is needed to do physics (e.g. to derive the scattering with undetermined spins in a small region of spacetime): the logical arrow of time determines in which direction the world is actually evolving, which is a completely necessary piece of information for any problem involving evolution in time and incomplete information (or anything that depends on logic), and the thermodynamic arrow of time can be shown to coincide with the logical arrow of time.

This association of one bit of information with all timelike vectors is clearly local and has nothing to do with any initial conditions somewhere in the distant past. I find this point self-evident: the proofs of the H-theorem never talk about the Big Bang because they don’t have to, and because the Big Bang has nothing whatsoever to do with particular local events. I think it is obvious that it is you who should be presenting some evidence for your extremely strange proposition that contradicts statistical physics as we know it. (Boltzmann’s discussion of the entropy near the “birth” of the Universe is just an application of his findings, not a universal tool needed for him to discuss lab phenomena.)

You say that I am “mixing observation with explanation”. Well, I am indeed “mixing” them in a well-known way: all explanations and their validity must ultimately be judged by observations. When it comes to your statements about the second law of thermodynamics, you indeed don’t seem to be doing the same “mixing” – also known as the empirical testing of hypotheses. That’s why you apparently have no trouble assuming hypotheses that are manifestly wrong on empirical grounds.

The initial conditions of the whole Universe, if they existed and were found in their full glory, would be an independent piece of physical laws, aside from the laws for evolution in time. But it would still be true that nothing about these extra laws about initial conditions – none of their actual features – would play any role for the proof of the H-theorem because the latter simply doesn’t depend on cosmology. It has nothing to do with cosmology. Our getting older has nothing to do with cosmology, either. It only has something to do with the local laws of physics and the logic that must hold locally in every small region of spacetime.

“For example, arguing the existence of an arrow of time from the fact that we have words for “beginning” and “end” is logically true, but as a scientific explanation it sounds rather anthropic, doesn’t it?”

I really don’t understand what you want to “explain” here. There are no extra “data” to explain. The logical arrow of time is necessary for logical reasoning – for the formulation of any realistic theory – as long as the propositions refer to events located in time and as long as some information may be known while other information may be unknown. No sufficiently complex reasoning about events in spacetime is possible without this logical arrow of time. I don’t know what it means to talk about physics without the logical arrow of time. Trying to “explain” why the logical arrow of time exists is the same as asking why logic exists, or why integers exist. It’s philosophical flapdoodle without any physical content. Moreover, in your and Sean’s case, it’s not just innocent, vacuous talk about nothing: you actually use it to justify incorrect theories and misguided research into non-existent physical phenomena.

In the very same way, asking why it is the future that evolves from the past, and not the other way around, is a meaningless linguistic exercise. The word “future” is defined as what is evolving from the “past”. By the H-theorem, we can also show that the “future” is the point in time where a closed physical system has a higher entropy than in the “past”. This is one bit of information – determining which of the moments is the future and which of them is the past – and the answer to this bit is a pure convention (definition of the words denoting future/past). There is absolutely no new physics here, so there can’t be any new “explanation”.

“I don’t at all get the point of your repeated discussion of scattering — it assumes an arrow of time, it doesn’t explain it.”

That’s the whole point of my comments. I am telling you that you need the logical arrow of time to exist and be used for any, arbitrarily simple elementary process in Nature, such as a scattering of two electrons, to be studied by science. The laws of physics as we know them do always automatically contain and have to contain the arrow of time as a part of their logic. There are no laws of QED (with unknown information included) that would have no arrow of time. It’s always there, from the beginning. The laws can’t exist without it just like they can’t exist without the superposition principle of quantum mechanics, associativity of multiplication of operators, or anything that is equally fundamental. That’s why it’s completely misguided to look for “further explanations” or “new processes” or “adjustments” to “explain” the arrow of time (again).

There are no new processes and no new insights to be found. The explanation is fully included in the basic laws of physics – it is necessary for exercises in physics as simple (and, microscopically, assuming pure states, C, P, T symmetric) as a two-electron scattering in QED. When the information is incomplete, the logical arrow of time and the past–future-asymmetric formulae are needed to make even the most elementary calculation. The idea of an extra “explanation” of the arrow is pure rubbish – a key statement about all of this discussion that you seem unwilling even to listen to. It’s as if I am not even writing it. You seem to have been given this “mission” to find another explanation of the arrow of time directly from God, and no amount of evidence can convince you that your assumption is fundamentally wrong.

“Also, it’s not mathematically correct to replace \leq with < to derive the low entropy of the initial state, you need the latter to justify the former."

I haven't derived the strict inequality because one cannot "strictly" derive that the entropy of something can't be at the maximum value. One can't prove the statement simply because it's not true. It is just a probabilistic statement. The entropy of a physical system, including the Universe, simply can be maximized (assuming that there is an upper limit). So if you're looking for a universal proof that it can't ever be maximized, you are looking for a proof of yet another untrue proposition.

Physical systems can maximize their entropy “somewhere” in time, but it’s usually not the “beginning”, because by the 2nd law, the beginning has a lower entropy than its relative future. And the cases where the entropy is maximized at the “beginning” are extremely uninteresting. Again, note that the statement that the beginning is not the state where you should expect a high entropy is a tautology. There’s nothing to “explain” here, either.

Moreover, whatever “initial” entropy of a physical system someone gives me, I can still ask “what was before that?”. By further logical inference – done correctly, using Bayesian inference – one can determine that the “initial” state has actually evolved from another state that preceded it, and the “other state” that may be reconstructed – probabilistically – almost certainly had a lower entropy.

In other words, one can show that the high-entropy state wasn't a real ultimate "beginning" of the physical system because there was another state before that, and one can deduce the probabilities that it was one state or another. Almost all such "ancestor" states one derives are lower-entropy states than the high-entropy "initial" (not really "initial") state you gave me. This fully solves your problem. The problem was that you gave me a high-entropy state and you intimidated me into believing that it was the real "beginning" i.e. that there couldn't have been any time before that. But I know that this assumption is incorrect. Whenever the entropy of something is high, there was time before that when the entropy was lower, and one can use logical inference to deduce what such a state could have been (priors are always needed for any retrodiction).

These general discussions have nothing to do with gravity. Quantum gravity may tell us details about the precise form of the Hartle-Hawking wave function (because in the Euclidean spacetime, a spherical space may be continuously deformed to a point, or to nothing), but it will surely not change the logic – or add “new explanations” – to things like the derivations of the second law. This whole promoted link between advanced and/or speculative physics on one side and mundane thermodynamic phenomena on the other is completely bogus. I think it's true and dishonest to justify expensive experiments to observe gravity waves by their ability to settle some philosophical discussions about the increase of entropy in everyday life: they have nothing whatsoever to do with them.

Best wishes

Lubos

on February 18, 2010 at 10:31 amLuboš Motl“True and dishonest” in the final paragraph should have been “untrue and dishonest”, sorry.

on February 18, 2010 at 12:44 pmLuboš MotlAlso, “nothing that” should have been “note that”.

Just to be sure, a thought experiment showing how ludicrous it is to link the second law to the Big Bang.

Imagine that there is a Minkowski, lambda=0 vacuum of string theory somewhere, and there are people living in it. (It may require unbroken SUSY, and it’s hard to reconcile unbroken SUSY with life and atoms, but imagine that this technical exercise can be solved.)

Their astronomers have also found out that the galaxies seem to be going away from each other. But they do see that there was a special center of the explosion – and something heavy exploded while the rest of the space was probably pretty empty at that time. They’re breaking eggs, not unbreaking them, and so on. Obviously, all their local observations linked to the second law are qualitatively just like ours. The entropy of the Universe (mostly one chunk of matter in a Minkowski space) when the astronomers live is 10^{105}.

The first message is that the Big Bang is not even needed for the second law to work just like in our world. There are no “t=0” initial conditions in such a Minkowski space.

But let me continue with the reconstruction. They find out that there was an explosion 14 billion years ago and the exploding stuff was very heavy. They find that the entropy of the initial state at “t=0”, or slightly afterwards – in the Minkowski space – was high. It was significantly lower than when the cosmologists lived, because the entropy always had to increase, but it was pretty high right after “t=0”, anyway. Something like 10^{90}, leading to 10^{105} by their time.

However, other cosmologists are still surprised by the high entropy – and temperature – of the explosion. They continue to study it and they actually find out that it wasn’t quite a spherically symmetric explosion. The galaxies show a tetrahedral symmetry (the discrete Gamma(E6)), while the full rotational symmetry is slightly broken.

So they eventually found out that it wasn’t an explosion but a collision of 4 heavy natural or God-created😉 bombs, coming from the four directions of the vertices of a tetrahedron to its center.

Another generation finds out that these four colliding objects had masses 10^{60} kg, and their shape was a collection of pieces of “thick cosmic strings” made out of uranium, which were actually frozen near absolute zero before the collision. Needless to say, the shape of each of these four colliding heavy uranium objects looked like the characters “360 factorial”, connected with pretty thin iron sticks. They decoded it because an early astronomer actually saw some pieces soon after they collided, and left a cryptic message “398…896” in his notebook.
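(As a quick check of the digits in that cryptic message: it is the puzzle number from the original post, which – per the earlier comment upthread – is 360! with its trailing zeros removed. A few lines verify the claim:)

```python
import math

# Puzzle number from the post: 360! with trailing zeros removed
# (per the comment upthread; it ends in ...896, not ...897).
n = math.factorial(360)
digits = str(n).rstrip('0')

# By Legendre's formula, 360! has exactly
# floor(360/5) + floor(360/25) + floor(360/125) = 72 + 14 + 2 = 88
# trailing zeros.
trailing = len(str(n)) - len(digits)
print(trailing, digits[:5], digits[-3:])
```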

The final picture – their cosmological standard model – is pretty clear. Before t=0, four frozen cold huge uranium bombs shaped as “360 factorial” written in the Latin alphabet were moving along trajectories corresponding to the vertex directions in a tetrahedron. They collided, warmed up, exploded, created the seeds of all galaxies, and then the galaxies continued in a similar way as they do in our Universe.

Just to say, the temperature of the uranium bombs was found to be near the absolute zero, so the temperature was low, too. So physicists would be discussing whether the temperature of the bombs was exactly absolute zero – something that the experiments can never fully settle – or slightly warmer. At any rate, the entropy of the Universe, previously known to be 10^{90} during the “t=0” explosion/collision, was lower before “t=0”.

Just because the physicists were allowed to ask “where did it come from?”, they could see that the “t=0” state had evolved from another state at a negative value of “t”. The latter obviously had a lower entropy than the “t=0” state, near zero, because the second law always holds – the arrow of time shows its muscles in every small region of spacetime. Before the collision, when the bombs were approaching each other, their (low) entropy wasn’t changing much. But if it was changing, it was increasing.

If the configurations of the bombs are extrapolated to “t=-infinity” in the Minkowski space, people would probably get some colliding objects that had some temperatures (possibly independent ones, because they were separated by asymptotically infinite distances from each other). Consequently, they carried an entropy that could have been nonzero, too. Whether the entropy was zero, small, or large at “t=-infinity”, the evolution after “t=0” could have worked almost just like in our Universe.

This example is awkward, and deliberately awkward. But its purpose is to return the dear reader into common sense. The evolution of a lab billions of years after some important event in the Universe has nothing to do with this event in the Universe. It’s driven by the local laws of physics. And the beginning could have been at a fixed time, like t=0, or at the asymptotic past, t=-infinity, and the entropy at the “earliest possible moment” could have been either zero, or small, or large. All options and their combinations are actually possible. The only exception is a large entropy at “t=0”, or another finite moment. If that occurs, and your quantum gravity theory is consistent enough to “resolve” singularities, you may always go before “t=0”, so it is somewhat problematic to call it “t=0” and reconstruct the state at a negative “t” (at least probabilistically). It’s just a name, or a conventional value of a coordinate, but there’s not God-given physical feature that would make this moment a “beginning”. Instead, you can always go before it. We use the word “beginning” for a point in time such that you can’t go before that.

The only thing that can prevent you from going before “t=0” is if the entropy is already tiny or zero at “t=0”, so going before that would violate the second law because the entropy couldn’t increase at those times. In that case, it’s the real beginning, and there can’t be any state of the same system from which the low-entropy “t=0” state had evolved. It could still have bubbled/separated from a larger system with a higher entropy, but that’s another discussion.

But whenever the entropy is positive and large, you can either go to earlier moments of time, when the entropy was lower, or you can indeed prove that the initial state was at equilibrium, at least piecewise – like my four nuclear bombs.

There are fun technical points about the particular cosmological setup in this toy universe. But the key point is that the evidence about phenomena in the lab cannot tell us anything about the entropy or other properties in the distant past – they’re logically independent portions of physics, separated by dozens of orders of magnitude on the distance scale. It’s just utterly absurd to link or identify the two (cosmology and thermodynamics in a lab).

Cheers

LM

on February 18, 2010 at 11:42 amJust LearningIt is very clear that a world governed by the heat equation or Ricci flow will always appear asymmetric to observers.

The universe will evolve into a pure de Sitter space. If one considers power instead of energy, this amounts to an expenditure of power approaching infinity.

Information, variance and power are all functionally equivalent. A spike in power that energizes a process, such as one governed by the heat equation or the Ricci flow, does not immediately imply that you will get the products of a long evolution (i.e. a spike in power is not going to give you a Boltzmann Brain). The evolution governed by the flow must occur to get the products of that flow. This is guaranteed by the asymmetry of the flow itself.

You simply cannot reach the earlier microstate by reversing the flow. All you can do is provide the initial information (power) for the flow from the beginning. Sean and his brain do not seem to understand this simple mathematical fact, and I am disheartened that other physicists whom I have tremendous respect for don’t seem to understand this.

Once again, a spike in power is not the same thing as going backward in time… you cannot “spike” yourself into a microstate that is the result of the forward operation of the heat equation. Call it the ultimate diode.

At best, we can postulate a small set, or even a spectrum of plausible initial conditions that a power spike can reach which would then forward evolve into the universe. These would have to be very distinct in and of themselves since they, once again, could not be a product of forward evolution governed by the heat equation.

If this is too hard, just think of a gasoline engine. It will run if I stick gas in the tank, but it won’t run if I stick exhaust in the tank.
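The “ultimate diode” remark can be illustrated numerically. The following is a minimal sketch (my own toy example, not anything from the thread): the discrete heat equation u_t = u_xx smooths a noisy profile when stepped forward in time, but running the very same update with dt negated amplifies the highest-frequency modes explosively – you cannot recover the earlier microstate by naively reversing the flow.

```python
import numpy as np

def heat_step(u, dt):
    # explicit finite-difference step for u_t = u_xx with dx = 1,
    # periodic boundary conditions (np.roll wraps around)
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    return u + dt * lap

rng = np.random.default_rng(0)
u0 = rng.standard_normal(256)   # noisy initial profile

fwd = u0.copy()
bwd = u0.copy()
for _ in range(200):
    fwd = heat_step(fwd, 0.2)    # forward: stable, variance decays
    bwd = heat_step(bwd, -0.2)   # "time-reversed": high modes explode

print(np.var(u0), np.var(fwd), np.var(bwd))
```

Forward, the variance shrinks as the profile smooths out; backward, it blows up by dozens of orders of magnitude, because each step multiplies the shortest-wavelength modes by a factor larger than one.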

on February 19, 2010 at 11:05 amJust LearningAnother point…

Whether we go the Copenhagen route or the MWI route with decoherence…it should be surprising to people that nature requires us to restore the irreversibility of the heat equation as we move from the quantum realm to the classical one.

If people really wanted to understand time they would research how reversing time forces one out of classical or decohered state, back into a quantum state…one that is, as it is often pointed out, reversible.

It should also be a significant physical observation that powerful releases of energy… like the Big Bang, don’t reverse time, but instead have a tendency “to smooth things out” and “restore higher frequency modes”. Oddly enough, the backward heat equation also has the tendency to restore higher modes and create a lot of very energetic smooth noise (since the lowest frequency modes are the only ones that won’t grow uncontrollably).
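The claim about which modes survive a backward heat flow has a one-line justification. In a sketch of my own (not from the comment): for u_t = u_xx, a Fourier mode e^{ikx} evolves as e^{-k²t} forward in time, so running time backward multiplies each mode by e^{+k²t} – the higher the frequency, the faster the uncontrolled growth, and only the lowest modes stay tame.

```python
import numpy as np

# Exact per-mode growth factors for the heat equation u_t = u_xx:
# forward in time a mode e^{ikx} is damped by e^{-k^2 t};
# backward in time it is amplified by e^{+k^2 t}.
t = 0.1
for k in (1, 4, 16):
    forward = np.exp(-k**2 * t)   # decays rapidly with k
    backward = np.exp(k**2 * t)   # grows explosively with k
    print(k, forward, backward)
```

For k = 16 and t = 0.1 the backward factor is already about 10¹¹, while the k = 1 mode has barely changed – exactly the statement that the lowest-frequency modes are the only ones that won’t grow uncontrollably.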

on February 20, 2010 at 8:10 pmLuboš MotlDear Just Learning,

decoherence is another, time-asymmetric process that inevitably shares the same arrow of time with the logical and thermodynamic arrows of time. But it is not true that it is needed for the thermodynamic arrow of time itself to emerge.

(By the way, decoherence doesn’t require the many-worlds interpretation. Decoherence is a real process that only depends on quantum mechanics, its methods to calculate probabilities, many degrees of freedom, and the tracing over some degrees of freedom that are identified as the “environment”. The tracing always creates a mixed state in the future from a pure state in the past: we may “forget” the information but we cannot “unforget” it, an asymmetry coming from the logical arrow of time into the derivation of decoherence.)
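The statement that tracing over the environment always creates a mixed state from a pure one can be checked in a few lines. Here is a minimal numerical sketch (the specific two-qubit Bell-state example is my choice of illustration, not taken from the comment): a pure entangled state of two qubits, traced over the second qubit, leaves the first qubit in the maximally mixed state I/2.

```python
import numpy as np

# Two qubits in the entangled pure state (|00> + |11>)/sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)                  # full density matrix, pure

# Partial trace over qubit 2: rho_A[i, j] = sum_k rho[(i,k), (j,k)].
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('ikjk->ij', rho4)

# Purity Tr(rho^2): equals 1 for a pure state, < 1 for a mixed one.
print(np.trace(rho @ rho))        # 1.0  (pure before tracing)
print(np.trace(rho_A @ rho_A))    # 0.5  (maximally mixed after)
```

The full state has purity 1, the reduced state has purity 1/2 (it is I/2 exactly) – “forgetting” the environment degraded a pure state into a mixed one, and no local operation on the first qubit can undo it.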

It is not true that the breaking of the time-reversal symmetry by the behavior of macroscopic objects is related to the quantum-classical transition. The time-asymmetric effective heat/diffusion equation, or friction, or any physical effect that contains first time-derivatives which are time-reversal asymmetric, may be derived both in classical and quantum physics, in one form or another. These asymmetries are linked to the increasing entropy of complex systems, and the entropy increases both in classical physics and quantum physics.

In fact, because entropy is only useful in the thermodynamic limit, where the number of degrees of freedom is infinite, this limit automatically includes the emergence of some classical limit from quantum mechanics, too. That’s why the thermodynamic phenomena are effectively classical even if you start with a quantum mechanical microscopic theory. The logic is the same. Quantum mechanics shows that the entropy can be zero – which corresponds to one microstate – and generally, it determines the additive shift in the entropy which was uncertain classically. But otherwise there’s no “interpretational” difference between classical and quantum statistical physics.

While the logical, thermodynamic, and decoherence arrows of time are inevitably aligned, the cosmological arrow of time (pointing in the direction of an expanding universe) is independent. The Universe – not ours, but a different one, with a larger mass density relative to the cosmological constant – could stop its expansion, but the entropy would still continue to grow. A related fact from the later evolution is that the macroscopic behavior near the Big Crunch wouldn’t be just a time-reversed mirror image of the Big Bang.

Best wishes

Lumo

on February 21, 2010 at 2:25 pmJust LearningLubos

Thank you for taking the time to correct some of my misconceptions. I will place some blame on the book From Eternity to Here when it comes to my statement linking MWI to decoherence, as that book was the most recently read and it implied such a dependence; my obvious lack of formal education on the subject led to me not properly filtering my statement.

My view on these matters is still evolving. I am taking a somewhat solipsist view of time and initial conditions, and am very rigorously defining the only initial conditions that have any importance as those that exist at the present moment (a presentist point of view).

For instance, at a particular instant a brain is given a particular set of information, and from that input of information, it must produce some output of action. How does it sort the data in order to choose the most consistent action?

An input-output scenario assumes an evolutionary process, so it would be logical for the brain to pick some set of evolutionary equations to solve, and then take action based on the solutions found.

What does the brain find? Whether we choose to iterate the input-output equations or solve directly, we do have to come up with some coordinate that keeps track of our position in the evolution, so we can call that time. The data presented to the brain can be treated in two different ways: it either represents initial values or final values. It would seem practical, if not necessary, for the brain to look for solutions based on evolution in either time direction.

I will say that the brain initially organizes the data in two spatial dimensions and then evolves it backward and forward in time, which the brain equates with a third spatial dimension. As the data is evolved backward, one finds an incredible amount of error accumulating as a square law around an increasingly smaller set of solutions. So in order to store all the data, the brain may choose to use a spherical storage system (radius equals time). The effect is that as one goes back in time, the resolution decreases to the point where all one sees is a lot of uniformly distributed, impenetrable, high frequency noise.

Now the brain focuses on the forward evolution, which it actually finds fairly easy: precise solutions exist and the evolution is extremely predictable. However, it finds that although it has extremely fine control in predicting the outcome of a single action, if it tries to predict the complete set of possible outcomes, it has to keep track of an extremely rapidly growing set of data. The task of storing all that data is staggering.

The brain is clever though and it recognizes that its action will change the input data from which it makes future predictions, so in the interest of saving storage space, it only predicts the future out to some preferred point of time, and then adds that data to its database. It then may or may not decide to only look at infinitesimal changes of time for the sake of prediction depending on whether it needs that level of sensitivity.

Some may be tempted to draw some parallels to disembodied brains (which seems to be closely related to solipsism), and to an extent I may be inclined to agree, but I think that many of the authors of brain discussions are being intentionally vague and imprecise, so it is extremely difficult to decipher what they are actually trying to say.

Absent from many of the cosmological discussions is the importance of interacting observers in making our universe what it is. It is our interaction with other observers that narrows the selection of the priors for making retrodictions; however, I don’t see why cosmologists are surprised that all retrodictions inevitably lead to a state of smooth, high energy noise.

I’m sure my stance on this will certainly evolve as my understanding does, I’ll be curious if there are any further comments on this.

on February 21, 2010 at 4:58 pmJust Learning“As the data is evolved backward, one finds an incredible amount of error accumulating as a square law around a increasingly smaller set of solutions.”

I meant the above as a plausible example, not a literal truth. An illustration of what I am trying to reason can be found in Numerical Modeling of Ocean Circulation by Robert N. Miller, p. 31 (this book is on Google Books).

on February 21, 2010 at 5:52 pmLuboš MotlDear Just Learning,

thanks for your interest.

Like other things, your link between MWI and decoherence has some true core – most sociological, in this case. Hugh Everett, the main father of MWI, is often credited for leading other people to think about “classical histories” in the context of quantum mechanics, and those can decohere. So he didn’t really discover decoherence but his MWI thinking was helpful.

Today, decoherence would be most likely technically associated with the “Consistent Histories” interpretations of quantum mechanics, my choice, and most advocates of CH would probably view CH as a refined modern version of the Copenhagen interpretation, anyway. What matters is that it’s just probabilities that can be calculated.

Concerning decoherence, it’s kind of fun to read Wojciech Zurek’s article published in Physics Today,

http://arxiv.org/abs/quant-ph/0306072

because it’s a sort of popular physics that is important, anyway.

Your attitude, presentism, is a legitimate attitude to the “reality” of spacetime vs space at the present. Results such as the “free will theorem” in quantum mechanics

http://arxiv.org/abs/quant-ph/0604079

do imply that the present can’t be decided in the past, so it’s really being decided or “lived” now. Well, I think that even a solipsist, when reasonable, should admit that it’s useful for him or her to assume that there existed some past😉 that has influenced the chances of the present outcomes, and that does explain most of the patterns seen today.

So far, I don’t quite understand your description of the brain dynamics, but it’s plausible that you have understood something nontrivial, and correctly. Well, I think that the brain evolves together with the rest of the Universe, so the rest of the Universe, and perceptions from the past observations, are inevitably being imprinted to the brain. That makes the brain entangled with the events in the past and their patterns, which always allows a smart enough brain to predict the future, after it extracts the right theories explaining the patterns in the past, as well as indirectly retrodict other events in the past that were not directly accessible to the brain.

Theories and all kinds of rational evidence always build on the logical evaluation of the events in the past (light cone), as imprinted into our memories, but it is the predictions of our (reverse engineered) theories into the future that can be done “directly”, giving us unique probabilities for our questions. Retrodictions require logical inference and a choice of priors.
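The prior-dependence of retrodictions can be made concrete with a toy Bayes-rule sketch (the numbers below are purely illustrative, my own, not anything Lubos computed): suppose the past state was either A or B, and a present observation is more likely given A than given B. The conclusion you draw about the past from the very same observation swings wildly with the prior you chose for the past states.

```python
def retrodict(prior_A, like_A=0.9, like_B=0.4):
    """P(past state was A | present observation) via Bayes' rule.

    like_A, like_B are the forward probabilities of seeing the
    present observation given past state A or B (made-up numbers).
    """
    num = like_A * prior_A
    den = num + like_B * (1 - prior_A)
    return num / den

print(retrodict(0.5))    # uniform prior over the past: ~0.69
print(retrodict(0.01))   # prior heavily favoring B:    ~0.02
```

Predictions into the future need only the forward dynamics; the retrodiction above is not well-defined until the prior over past states is supplied – which is exactly the asymmetry between prediction and retrodiction.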

The arrow of time etc. is obviously imprinted into the way the brain works, but it’s important to note the relationship here: the arrow of time, much like other laws of physics, affects the brain. It’s not that the brain affects the laws of physics.

Best wishes

Lubos