Fundamental physics has a strong appeal to the imagination. There ought to be some underlying structure to our theories of physics, something beautiful and intuitive that explains the parameters of the standard model, unifies gravity and quantum mechanics, explains puzzles like the black hole information paradox, and probably has a few bonus surprises in store for us. This is probably one of the main reasons many people with strong expertise in diverse disciplines take an active interest, read blogs and popular books on the subject, and sometimes try to lend a hand and help.
It is natural for a person with an intimate knowledge of some theoretical structure to try to apply it to the big questions of fundamental physics. Could the fundamental theory involve spin chains, Turing machines, cellular automata, or your favorite (sort of) Lie superalgebra? This looks much more likely if you have devoted your career to studying the intricacies of these structures. There is of course nothing wrong with that: nobody has any idea what the fundamental theory of our universe looks like, the issues are difficult, and we can use all the help we can get. To facilitate such help I’ll offer some unsolicited advice: Nature is relativistic, and this fact is crucially important!
In fact, Lorentz invariance is so important, and relativistic systems are so entirely different from non-relativistic ones, that there is a whole discipline (theoretical high energy physics) devoted almost exclusively to the study of Lorentz invariance and its consequences. Along the way we have stumbled upon a few things that may well be interesting and relevant in our search for that underlying structure.
The message has two parts really, and both need some unpacking, so I’ll only summarize them here. First, fundamental violations of Lorentz invariance, meaning the idea that Lorentz symmetry is not an ingredient of that holy grail, the fundamental theory of our universe, are strongly excluded by experimental constraints. It is not known how to break Lorentz symmetry in a way that induces only small effects on observable physics (unlike, for example, symmetries like baryon number, which can be violated by a small amount; technically: there are many new relevant and marginal operators once you allow for Lorentz violation at high energy, and their coefficients are constrained to a ridiculous degree by experiment). Therefore, all the evidence we have indicates that the fundamental theory is Lorentz invariant.
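To make the "relevant and marginal operators" point concrete, here is a hedged illustration in my own notation (following the general effective-field-theory logic, not any specific calculation from this post): one can add to the Dirac Lagrangian a dimension-four Lorentz-violating term,

```latex
% A marginal (dimension-4) Lorentz-violating operator for a fermion field \psi,
% where c_{\mu\nu} is a fixed background tensor singling out preferred directions:
\mathcal{L} \;\supset\; \mathcal{L}_{\mathrm{Dirac}}
  + c_{\mu\nu}\,\bar{\psi}\,\gamma^{\mu}\, i\partial^{\nu}\psi .
```

Because this operator is marginal, its effects do not fade away at low energies: even a tiny coefficient generated near the Planck scale survives essentially undiluted down to laboratory energies. That is why experiments (clock-comparison tests, astrophysical polarization measurements, and the like) can bound the components of such coefficients to extraordinary precision, with the exact numbers depending on the operator and the sector being probed.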
(That is not to say that Lorentz invariance cannot be violated spontaneously. Lorentz invariance is the symmetry of empty space, and in our universe space is not really empty. However, in that case the equations governing the dynamics are still symmetric, and the physics at high energies (or short distances) is insensitive to the breaking, so most of the consequences of the symmetry are still there.)
Secondly, on the more theoretical side, Lorentz-invariant systems are almost impossible to quantize. It took a series of miracles over a few decades to find a consistent quantum mechanical Lorentz-invariant theory, namely quantum field theory. In other words, the demand of Lorentz invariance, made impossible to ignore by experimental constraints, is highly restrictive theoretically. For example, there is a whole slew of no-go theorems (which go under names such as Weinberg-Witten or Coleman-Mandula), but also common-lore considerations and plain common-sense arguments, that pretty much exclude many models of fundamental physics from the get-go, without having to get into any details.
For the innovative outsider, this message brings both good news and bad news. The good news is that if you ignore the restrictions coming from Lorentz invariance (or better still, if you manage to find a convincing loophole), the classic open problems of fundamental physics become wide open. Lots of ideas that are not even entertained by the community of experts (or have long been dismissed by them, as the case may be) can suddenly become relevant. It is likely you can come up with many wonderfully creative insights, write papers and popular articles, and make some real interesting contributions. The bad news is left as an exercise to the reader.
Oh yeah, almost forgot, that business with the computer in the title… As is probably clear by now, this post is a bait and switch kind of post; it is not really specific to computers, quantum or otherwise. In fact, as far as I know, as long as you don’t define precisely what you mean by a computer (something that some advocates of the idea carefully avoid), the universe could very well be a computer (same goes for my socks). But serious people would probably have a computational model in mind, something like a Turing machine or the standard circuit model of quantum computing. In any such model, the first thing I would try to understand is how Lorentz symmetry is implemented.
This should not be simple, as there is some tension in my mind between existing models of computation and Lorentz invariance. Namely, in a Lorentz-invariant theory the space of states is always continuous. In technical language: the Lorentz group is non-compact and all its representations are continuous. In plain English: you can boost any state to have an arbitrary momentum, which can be any real number. Any fundamental discreteness, such as is common in all computational models I know of, would have to be of a very special sort to hide its Lorentz-violating effects from existing tests of special relativity (no need for any new tests). Perhaps this is possible, but how this can be done is a mystery to me, and so this is to my mind the first serious test of any discrete model purporting to be a theory of nature.
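The continuity claim can be made concrete with one line of special relativity (these are standard textbook formulas, nothing specific to this post). In 1+1 dimensions, with c = 1, a boost of rapidity \(\eta\) acts on a particle's energy and momentum as

```latex
E' = E\cosh\eta + p\sinh\eta, \qquad
p' = p\cosh\eta + E\sinh\eta ,
```

and since \(\eta\) ranges over all real numbers, so does the boosted momentum \(p'\). Any exact discretization of momenta (or of the space of states) therefore singles out preferred frames and breaks the symmetry outright.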
While we are at it, maybe starting small would be more appropriate – let’s forget about everything and concentrate on something, especially something we already know well. How about quantum electrodynamics, the theory of photons and electrons, the familiar playing ground where we know how to calculate pretty much anything: can that theory be efficiently simulated by your favorite universal model of quantum computation? I for one would be interested to know if it can.



The constraints on Lorentz invariance come from the UV, right? But what happens close to the big bang? All high energy experiments will confirm local Lorentz invariance, but where does the background end and where does the dynamics begin? Do we even have a way of defining the problem properly against the background of the expanding universe?
In a sense, you have already addressed this point because maybe each universe is a choice of vacuum (or state) in a fundamentally Lorentz invariant theory, but this seems weak to me. It seems a lot like we are brushing a lot of issues under the rug of ignorance. In a different background/asymptotics, it is not even clear to me what are the right claims or questions. Any comments would be much appreciated.
Good point. At extremely short distances we know pretty much nothing. But just below the Planck scale we should be able to discuss the effective field theory resulting from your favorite theory. Such an EFT had better be Lorentz invariant to fantastic accuracy, because even minuscule violations of LI at such small distance scales would likely result in conflict with low energy experiments.
The most natural guess, then, is that your fundamental theory, whatever it is, is itself LI; if it is not, you have a lot of explaining to do.
Well, assuming that we were living in a massive computer simulation, assuming something this truly incomprehensibly fantastically mind-bogglingly large were possible, it would have to be so beyond the level of what we call computing today it would be useless to even term it as such. It’s kind of like comparing the human brain to a computer – sure, they both process information, but they do it in such radically different ways currently that all comparisons stop after a very basic level.
If we are currently in a simulation, I don’t think there would be much way of telling unless we could look beyond our local sphere. Much as the size and shape of the earth were speculation until we looked out at our local cosmos, we won’t be able to tell until we can see beyond the edges of the universe: simulating the big bang conditions we currently can’t observe directly (except for the microwave background, but that’s just echoes), sensing beyond the edges of the visible universe, or probing below the Planck scale where our current measurements break down.
Imagine being in one of our computer simulations – at some point you would reach an area you couldn’t see past, one you couldn’t comprehend because you are made of bits and have senses that literally end at the end of the simulation. You might, within the constraints of the world, be able to build machines that help you probe the edges, with which you can look down and find out you’re made of bits (quanta), or look to the edges and determine there are some forces out there interacting with your world even if you can’t perceive them (dark flow, dark energy, dark matter). It’s easy to make the analogies. Finding out the truth, however, would be tricky, because the simulation you are in doesn’t have the capacity to hold enough information to map the system that’s running it, because the system required to run it is necessarily orders of magnitude larger (at least as we understand things).
I’m not advocating that our universe is a simulation, it’s just a fun thought game. If there were a ‘computer’ large enough to continuously process 10^80 atoms of matter and all the sub-atomic particles that make up those atoms (I can’t even imagine what calculation would be necessary to come up with a rough estimate for that number, but I might guess it involves an order of magnitude near or exceeding our planet’s favorite search engine), then, according to the way we currently process information, whatever contains our “Universe 1.0” software would be, relative to our universe, about the scale of our universe relative to our biggest supercomputers.
Think about how long it took bacteria to evolve into life that could comprehend the scale of rocks, rivers, mountains, the earth, the solar system, the galaxy, the universe. If something is simulating our universe, we’re at the level of bacteria when compared to that something. Luckily life grows sorta exponentially, so it’ll probably only take us hundreds of thousands rather than hundreds of millions of years to solve the problem of what lies on the outside of our universe.
Assuming we survive. 😀
My 2 cents:
If someone makes a really good simulation, the inhabitants of the simulation would not be able to communicate with the ‘outside’ at all. I think saying that we live in a simulation is interesting philosophy for when one is drinking beer, but it is not a useful description of physical reality.
If we can get a message across to the ‘simulator’, then it becomes more useful as a matter of principle, but until then, it’s not physics.
Actually, I am rather fond of philosophy, and I think any half decent philosopher would have the training and inclination to instinctively cringe at meaningless pseudo-statements.
But I was not talking about the simulation non-argument. There could be meaningful assertions regarding computational models of fundamental physics, and I think LI is their most serious failure mode.
“That is not to say that Lorentz invariance cannot be violated spontaneously. Lorentz invariance is the symmetry of empty space, and in our universe space is not really empty. However, in that case the equations governing the dynamics are still symmetric, the physics at high energies (or short distances) is insensitive to the breaking, so most of the consequences of the symmetry are still there.”
I don’t understand that. At high energies (Planck scale) gravity becomes important and you can’t ignore the gravitational field. At low energies you can ignore it of course because it is too weak and you can say that your space-time is globally a Minkowski space. That’s why QFT works well but we need a new theory at high energies.
Also, again I have a problem with this notion of empty space-time with global Lorentz symmetry. You can’t really escape from the gravitational field. Space-time is the gravitational field, and global inertial frames are really local. Global LI is an approximation. That is why GR (a theory of gravity) is invariant under local Poincaré symmetry; or, even better, you can introduce gravity by demanding that the global Poincaré symmetry become local.
At high energies it’s hard to imagine a smooth geometry anyway. At least some sort of fuzzy, non-commutative geometry is to be expected.
So your fundamental theory should be globally LI at low energies. Basically, you should be able to derive QFT, which is globally LI, at low energies.
At least this is my understanding, Moshe. Correct me if you think I’m wrong.
BR
“the universe could very well be a computer (same goes for my socks)”
Without justifying too much, what if you had N^2 socks and let N go to infinity? The socks are also attached to springs and somehow related to one another by some compact Lie group.
Also, what happens to Lorentz invariance in the infinite momentum frame?
Giotis, I think what you are saying is fine, look at my first comment above to see if you agree.
Lionel, if you can find highly supersymmetric socks, and they interact in a very specific way, you may find that your model is LI even if it doesn’t look that way to the untrained eye. There certainly could be other models like that, but this is quite a big burden of proof.
Can the desert of rocks satisfy the Lorentz invariance requirement?
Yes, in that sense I agree. Thanks.
Weak-field Lorentz invariance breaking merely requires a chiral vacuum background (a vacuum left foot) only active in a selective observation. Gigalight-years of vacuum path exhibit no dispersion, dichroism, or optical rotation from radio through X-ray (e.g., quasar polarized emission).
A chiral vacuum background is inert in the massless sector (photons do not configure; atoms form static crystal lattices), to achiral mass distributions, to unresolved chiral (racemic) mass distributions, and to resolved low amplitude parity divergence mass distributions. No prior observation in any venue at any scale would be pertinent. The smallest self-similar chirality emergence scale easily obtainable is a 0.0147 nm^3 sphere of alpha-quartz crystal lattice.
Eötvös experiments are sensitive to better than a 5×10^(-14) difference/average of gravitational and inertial mass. Oppose single-crystal solid spheres of the enantiomorphic space groups P3(1)21 and P3(2)21 quartz. Fitting a vacuum left foot with left and right quartz shoes elicits different energies of interaction and a net non-zero output. If there is no left foot there is no net non-zero signal, as with all prior EP experiments dating back to Galileo and Stevin.
The worst possible output is the SOP output. Any other result is historic. Controls are each chirality of quartz run against fused silica, and fused silica against itself to validate a null output. Metric gravitation in pseudo-Riemannian spacetime will be an EP = true subset of teleparallelism in Weitzenböck spacetime (theorists save face). Somebody should look.
I posted my reply to some of L. Motl’s comments here:
http://aetherwavetheory.blogspot.com/2009/02/lorentz-symmetry-and-string-theory.html
I posted my reply to some of Motl’s comments here:
tinyurl.com/cr5j9s
oops, wrong blog.
Hope a naïve undergrad question is alright here – why would we expect that Lorentz invariance (or our current experimental evidence) requires continuity, as opposed to very very small discreteness? Wouldn’t we expect that in some sort of “fundamental theory”, frame momenta might well be quantized anyway? The group theory/representation theory behind it will certainly tell us continuity but I’d be very interested if something in the math tells us that the higher-level results we see require lower-level continuity (rather than being a very good approximation).
Sorry, I can comment on your posts explicitly, if you want – but most of my objections are already contained in my replies to LuMo. He is simply a more seductive prey for me… ;-)
The main point of my objection was that our world isn’t only relativistic, it’s quantum mechanical as well, and QM violates Lorentz symmetry heavily on the background, being dual to relativity via the AdS/CFT correspondence.
Adam: first, discreteness of momenta would be related to long-distance effects (for example, if one of the directions were a circle of huge, cosmological, circumference). To get fine spatial discreteness we’d need something new to happen at very large momenta, say a maximal allowed momentum. This clearly will break Lorentz invariance, and will be roughly related to some discreteness at short distances.
At some level, really fine discreteness is indistinguishable from a continuum. The question is then quantitative: what is the scale of discreteness allowed by experiment, and what does that tell us about quantum gravity? This is highly constrained; it turns out any such spatial structure has to be many orders of magnitude below the Planck scale.
Put differently, if we are interested in physics at the Planck distance scale or above, we don’t have the resolution to distinguish between really fine discreteness and a continuum. Maybe there is some such structure at distances way below the Planck length scale, but this will be unrelated to quantum gravity, which kicks in at the Planck length (or to anything else we know of, for that matter).
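As a toy illustration of the quantitative point (my own sketch, not anything from the thread): put a massless particle on a 1D lattice of spacing a. The lattice dispersion relation approaches the relativistic one, E = |p| in units with c = 1, only for momenta far below the cutoff ~1/a, and the fractional deviation grows as the momentum approaches that cutoff:

```python
import math

# Toy model: a 1D lattice of spacing a replaces the massless relativistic
# dispersion E = |p| (units with c = 1) with E_lat(p) = (2/a)*|sin(p*a/2)|,
# which matches the continuum only for momenta well below the cutoff ~1/a.

def continuum_energy(p):
    return abs(p)

def lattice_energy(p, a):
    return (2.0 / a) * abs(math.sin(p * a / 2.0))

a = 1.0  # lattice spacing in arbitrary units (the "discreteness scale")
for p in [0.01, 0.1, 1.0, 3.0]:
    e_c = continuum_energy(p)
    e_l = lattice_energy(p, a)
    print(f"p={p:5.2f}  continuum E={e_c:.4f}  lattice E={e_l:.4f}  "
          f"fractional deviation={(e_c - e_l) / e_c:.2%}")
```

Shrinking a pushes the deviation to ever higher momenta, which is the sense in which sufficiently fine discreteness is experimentally indistinguishable from a continuum at accessible energies; the experimental question is just how small a has to be.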
Thanks! That’s the kind of answer I was looking for… out of curiosity, what sort of experimental constraints are there on the scale of spatial structure like that?
By AWT this constraint doesn’t exist, but there are more or less significant conceptual limits. For example, the rest mass of the photon appears to be zero, but when the wavelength of a photon becomes comparable to the size of the observable Universe, the photon cannot move inside it any further. The energy of the photon then becomes equivalent to its rest mass. Such a mass appears low (~10^-61 kg), but there is an even stronger limit. When the wavelength of a photon becomes comparable to the wavelength of the CMB, the photon dissolves in the CMB noise, which corresponds to the graviton noise of the previous Universe generation. In this way the minimal rest mass of the photon is quite large, but it’s limited by the observability of the photon, not by the existence of the photon as such.
If some civilization could look inside a large black hole, it would probably see the same things we observe around us, but it would probably dissolve if it could visit us. Such a civilization could, however, construct a giant microscope, as large as the whole black hole, and use focused gravitational waves to observe the life inside. It could then see more than a single generation of the Universe, because of the tunnelling of information through the event horizon. Therefore the observational limit is just a matter of observational scope, in my opinion.
I have to say that there is a certain tension between this post (on the importance of Lorentz Invariance, as a short-distance symmetry), and your previously-expressed opinion that spacetime geometry is “emergent.”
(For instance, which Lorentz group are we talking about? SO(3,1)? SO(9,1)? SO(10,1)? The answer depends very much on what question we are asking, and in what situation.)
That’s not to say that I disagree with any of the specific points that you make. It’s just that, when we include gravity (which the folks hawking alternatives surely mean to do), the situation is more subtle. Not less constraining than what you describe; in fact, I would argue, more constraining.