
GR turns 100

For those of you who have free time to read about the history of gravitation, here is a good link to many famous papers on the subject:

http://journals.aps.org/general-relativity-centennial

Happy anniversary GR!

If some of my students were writing problems

$\vec v_A, \vec v_B; v_{AB}?$

Faster than light neutrino claim

Well, the press is all fired up about a claim of faster-than-light neutrinos. The claim, from the OPERA experiment, can be found in this paper. The paper was released on September 22nd and has already gotten 20 blog links. Not bad for a new claim.

Considering that the news organizations are happily bashing special relativity, one can always rely on XKCD to spin it correctly.

Now more to the point: the early arrival time is claimed to be 60 nanoseconds. The distance between the emitter and the observer is claimed to be known to about 20 cm, certified by various national standards bodies. A whole bunch of different systematic errors are estimated and added in quadrature, not to mention that satellite relays are needed to match the various timings.

60 nanoseconds corresponds to about 20 meters (just multiply by the speed of light), and they quote an uncertainty with both a statistical and a systematic component. The statistical error comes from a likelihood fit. The systematic error is irreducible; in a certain sense it is the best guess for what the number actually is. They did a blind analysis: this means that the answer is kept hidden until all calibrations have been made, and only then is the number for the measurement revealed.
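The nanoseconds-to-meters conversion is quick to check; a two-line sketch using the defined value of the speed of light:

```python
c = 299_792_458          # speed of light in m/s (exact, by definition)
early_arrival = 60e-9    # the claimed early arrival, in seconds (60 ns)

distance = early_arrival * c   # equivalent distance
print(distance)                # roughly 18 m, i.e. "about 20 meters"
```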

My first reaction is that it could have been worse. It is a very complicated measurement.

Notice that if we assume all systematic errors in Table 2 are aligned, we get a systematic error that can be three times as big. It is dominated by what they call the BCT calibration. The errors are added in quadrature under the assumption that they are completely uncorrelated, but it is unclear whether that is so. And the fact that one error dominates so much means that if they got that one wrong by a factor of 2 or 3 (also typical for systematic errors), the result loses quite a bit of significance.
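To see the difference between the two assumptions, here is a sketch with a purely illustrative error budget (these are not the actual Table 2 numbers; one dominant entry stands in for the BCT calibration). The quadrature sum is essentially set by the largest entry, while the aligned (fully correlated) sum can be substantially bigger:

```python
import math

# Illustrative systematic errors in nanoseconds -- NOT the real Table 2 values
errors_ns = [5.0, 2.0, 1.0, 1.0, 0.5]

quadrature = math.sqrt(sum(e**2 for e in errors_ns))  # assumes uncorrelated errors
aligned = sum(errors_ns)                              # assumes fully aligned errors

print(round(quadrature, 2), aligned)  # quadrature ~5.59 ns vs aligned 9.5 ns
```

Doubling the dominant 5.0 ns entry alone would push the quadrature sum to nearly 10 ns, which is the sense in which a single mis-estimated calibration can erode the significance.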

My best guess right now is that there is a systematic error that was not taken into account. This does not mean that the people who run the experiment are not smart; it’s just that there are too many places where a few extra nanoseconds could have sneaked in. It should take a while for this to get sorted out.

You can also check Matt Strassler’s blog and Lubos Motl’s blog for more discussion.

Needless to say, astrophysical data from SN1987a point to neutrinos behaving just fine, and they have a much longer baseline. I have heard claims that the effect must depend on the energy of the neutrinos. This can be checked directly: if I were running the experiment, I would repeat it with lower-energy neutrinos (for which we have independent data) and see if the effect goes away.
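A back-of-the-envelope comparison shows how much stronger the supernova baseline is. Using round numbers (a ~730 km CERN-to-Gran-Sasso baseline for OPERA; SN1987a at roughly 168,000 light-years, with the neutrino burst arriving within a few hours of the light), the implied fractional speed deviations differ by about four orders of magnitude:

```python
c = 299_792_458              # m/s
year = 365.25 * 24 * 3600    # seconds in a year

# OPERA: 60 ns early arrival over a ~730 km baseline (round numbers)
opera_frac = 60e-9 * c / 730e3

# SN1987a: neutrinos within ~3 hours of the light after ~168,000 years in flight
sn_frac = 3 * 3600 / (168_000 * year)

print(f"{opera_frac:.1e} vs {sn_frac:.1e}")  # ~2.5e-05 vs ~2.0e-09
```

Of course, the SN1987a neutrinos had much lower energies than the OPERA beam, which is exactly why the energy-dependence question matters.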

Black holes as frozen stars

We now have a few working examples of a microscopic theory of quantum gravity. All of them come with specific boundary conditions (like any other equation in physics or mathematics), but otherwise enjoy full background independence. In particular, all of those theories include quantum black holes, and we can ask all kinds of puzzling questions about those fascinating objects. Starting with: what exactly is a black hole?

First quantization, first pass

Suppose you want to solve a linear partial differential equation of the form $O \psi(x) = j(x)$, which determines some quantity $\psi(x)$ in terms of its source $j(x)$. Here $x$ could stand for possibly many variables, and the differential operator $O$ can be pretty much anything. This is a very general type of problem, not even specific to physics. An example in physics could be the Klein-Gordon equation, or, with some more bells and whistles, the Maxwell equations, which determine the electric and magnetic fields.

Let us replace this problem with the following equivalent one. If we find a function $\psi(x,s)$ such that:

$\frac{\partial \psi}{\partial s} +O \psi =0$

with the initial condition $\psi(x, s=0) = j(x)$, and assuming the regularity condition $\psi (x,s= \infty) \rightarrow 0$, then it is easy to see that the function $\psi(x) = \int_0^\infty \psi(x,s)\, ds$ satisfies the original equation we set out to solve. Indeed, acting with $O$ under the integral and using the equation above, $O \psi(x) = -\int_0^\infty \partial_s \psi(x,s)\, ds = \psi(x,0) - \psi(x,\infty) = j(x)$.
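One can check this proper-time representation numerically in a discretized setting. A minimal sketch, taking $O = -d^2/dx^2 + m^2$ on a finite grid with a Gaussian source (the grid size, mass, and $s$-cutoff are arbitrary illustrative choices):

```python
import numpy as np

n, L, m2 = 200, 10.0, 1.0
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]

# Finite-difference Laplacian (Dirichlet boundaries), so O = -lap + m^2
lap = (np.diag(np.full(n-1, 1.0), -1) - 2.0*np.eye(n)
       + np.diag(np.full(n-1, 1.0), 1)) / dx**2
O = -lap + m2*np.eye(n)

j = np.exp(-x**2)                    # a smooth source j(x)
psi_direct = np.linalg.solve(O, j)   # solve O psi = j directly

# Proper-time route: psi(x,s) = exp(-O s) j decays in s, and
# psi(x) = int_0^infty psi(x,s) ds.  Diagonalize the symmetric
# matrix O and do the s-integral mode by mode (trapezoid rule).
w, V = np.linalg.eigh(O)
coeffs = V.T @ j
s = np.linspace(0.0, 40.0, 4001)
F = np.exp(-np.outer(s, w))          # e^{-w s} for each mode
ds = s[1] - s[0]
I = ds * (F[0]/2 + F[1:-1].sum(axis=0) + F[-1]/2)
psi_heat = V @ (I * coeffs)

print(np.max(np.abs(psi_direct - psi_heat)))  # should be small: the routes agree
```

The $s$-integral of $e^{-ws}$ is $1/w$ mode by mode, which is why summing the heat flow over proper time reproduces $O^{-1} j$.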

Now, this new equation for $\psi(x,s)$ looks kind of familiar, if we are willing to overlook a few details. If we wish, we can think of $\psi(x,s)$ as a time-dependent wave function, with the parameter $s$ playing the role of time. The equation for $\psi(x,s)$ could then be interpreted as a Schrödinger equation, with the original operator $O$ playing the role of the Hamiltonian. We are ignoring a few issues to do with convergence and analytic continuation, and the related fact that the Schrödinger equation is complex while the one we are discussing is not. Never mind; these are subtleties to be considered at a later stage.

The point is that we can now use any technique we learned in quantum mechanics to solve the original equation – path integral, canonical quantization, you name it. We can talk about the states $|x\rangle$ and the Hilbert space they form, Fourier transform to get another basis for that Hilbert space, even discuss “time” evolution (that is, the dependence of various states on the auxiliary parameter s). We can get the state $\psi(x,s)$ by summing over all paths of a “particle” with an appropriate worldline action and boundary conditions. Depending on the problem, we may be interested in various (differential) operators acting on $\psi(x,s)$, and they of course do not commute, resulting in uncertainty relations. You get the picture.

This technique is sometimes called first quantization, the Schwinger proper-time method, or the heat kernel expansion. Whatever you call it, a priori it has nothing to do with quantum mechanics: there are no probabilities, no Planck constant, and no wavefunctions in any real sense. At this point we could just as well be discussing the financial markets, the population dynamics of bacteria, or simply classical field theory.

In the second pass, we can apply this idea to linear fields, generating solutions to various linear differential equations. Some of those equations are Lorentz invariant (the Klein-Gordon, Dirac, and Maxwell equations), but they have nothing to do with quantum mechanics, despite originally being referred to as “relativistic wave equations”. Once we add spin to the game, we start encountering the fascinating structures of (worldline) fermions and supersymmetry (not to be confused with spacetime fermions and supersymmetry), and we are also in good shape to make the leap from classical field theory to classical string theory. Maybe I’ll get to that sometime…