
## Woof Woof

It’s very nice to discover humor in a paper that I thought I knew all too well. Above is a Feynman diagram taken from said paper. Although not as cute as a penguin, it might compete with other dog renditions in popular culture.

## Unstable Universes

It’s a fine day for the Universe to die, and to be made anew! Well, maybe not, but the Internet is abuzz with a reincarnation of the unstable universe story. (You can also see it here, or here; the whole thing is trending on Google.) In other words, this is known as tunneling between vacua. And if you have followed the news about the Landscape of vacua in string theory, this should be old news: we may live in an unstable Universe, though we don’t know whether we do. For some reason, this wheel gets reinvented again and again under different names. All you need is one paper, or conference, or talk to make it sound exciting, and then it’s “Coming attraction: the end of the Universe… a couple of billion years in the future.”

The basic idea is very similar to superheated water and the formation of vapor bubbles in the hot water. What you have to imagine is that you are in a situation where you have a first order phase transition between two phases. Call them phase A and phase B for lack of better words (superheated water and water vapor), assume that the energy density in phase A is larger than the energy density in phase B, and assume that you happened to get a big chunk of material in phase A. This can be done in some microwave ovens, and you can get water explosions if you don’t watch out.

Now let us assume that someone happened to nucleate a small (spherical) bubble of phase B inside phase A, and that you want to estimate the energy of the new configuration. For simplicity, you can make the approximation that the wall separating the two phases is thin, and that there is an associated wall (surface) tension $\sigma$ accounting for the energy needed to transition between the phases. The energy difference (or difference between free energies) of the configuration with the bubble and the one without the bubble is

$\Delta E_{tot} = (\rho_B-\rho_A) V +\sigma \Sigma$

where $\rho_{A,B}$ are the energy densities of phases A and B, $V$ is the volume of region B, and $\Sigma$ is the surface area between the two phases.

If $\Delta E_{tot}>0$, the surface term stores more energy than the volume term releases. In the limit where we shrink the bubble to zero size, we get no energy difference. For big bubbles, the volume term wins over the area term and we get a net lowering of the energy, so the system no longer has enough energy to convert the region filled with phase B back into phase A. In between there is a Goldilocks bubble that has exactly the same energy as the initial configuration.

So if we look carefully, there is an energy barrier between having no bubble and being able to nucleate a Goldilocks bubble large enough that there is no net change in energy relative to the no-bubble configuration. Bubbles that are too small tend to shrink, and bubbles that are big enough grow even bigger.
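As a minimal sketch of this energetics, here is a short Python snippet that evaluates $\Delta E_{tot}$ for a spherical bubble. The values of $\sigma$ and of the energy density difference are made-up numbers for illustration, not those of any real system; the barrier top at $R=2\sigma/\Delta\rho$ and the Goldilocks radius at $R=3\sigma/\Delta\rho$ follow directly from the formula above.

```python
import numpy as np

# Made-up numbers, in arbitrary units; not the values for any real system.
sigma = 1.0  # wall (surface) tension: energy per unit area
drho = 0.5   # energy density excess of phase A over phase B

def delta_E(radius):
    """Energy cost of a spherical bubble of phase B inside phase A:
    the volume term lowers the energy, the surface term raises it."""
    volume = (4.0 / 3.0) * np.pi * radius**3
    area = 4.0 * np.pi * radius**2
    return -drho * volume + sigma * area

R_barrier = 2.0 * sigma / drho     # top of the barrier: d(delta_E)/dR = 0
R_goldilocks = 3.0 * sigma / drho  # delta_E = 0: same energy as no bubble

print(delta_E(R_barrier))     # height of the energy barrier (positive)
print(delta_E(R_goldilocks))  # ~0; larger bubbles lower the energy and grow
```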

There are two standard ways to get past such an energy barrier. In the first way, we use thermal fluctuations. In the second one (the more fun one, since it can happen even at zero temperature), we use quantum tunneling to get from no bubble to bubble. Once we have the bubble, it expands.

Now, you might ask, what does this have to do with the Universe dying?

Well, imagine the whole Universe is filled with phase A, but there is a phase B lurking around with less energy density. If a bubble of phase B happens to nucleate, then such a bubble will expand (usually it will accelerate very quickly to reach the maximum speed in the universe: the speed of light) and get bigger as time goes by, eating everything in its way (including us). The Universe filled with phase A gets eaten up by a universe with phase B. We call that the end of Universe A.

You need to add a little bit more information to make this story somewhat consistent with (classical) gravity, but not too much. This was done by Coleman and De Luccia, way back in 1980. You can find some information about this history here. Incidentally, this has been used to describe how inflating universes might be nucleated from nothing, and people who study the Landscape of string vacua have been trying to understand how this tunneling between vacua might seed the Universe we see, in some form or another, from a process where these tunneling events explore all possibilities.

You can reincarnate that into today’s version of “The end is near, but not too near.” We know the end is not too near, because if it were, it would have already happened. I’m going to skip this statistical estimate: all you have to understand is that the expected time to nucleate such a bubble somewhere has to be at least the age of the currently known Universe (give or take). I think the only reason this got any traction is that the Higgs potential of just the Standard Model, with no dark matter and nothing more, is somehow involved.

Next week: see a baby Universe being born! Isn’t it cute? That’s the last thing you’ll ever see: now you die!

Fine print: Ab initio calculations of the “vacuum energies” and “tunneling rates” between various phases are not model independent. It could be that the age of the current Universe is in the trillions or quadrillions of years if a few details are changed. And all of these details depend on the physics at energy scales much larger than those of the Standard Model, about which we know very little. The main reason these numbers can change so much is that a tunneling rate is calculated by taking the exponential of a large negative number. Order one changes in the quantity we exponentiate lead to huge changes in estimates for lifetimes.
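To see how touchy this is, here is a tiny Python illustration with a made-up tunneling exponent (B = 400 is not a real Standard Model number): the rate goes like $e^{-B}$, so the lifetime goes like $e^{B}$, and modest fractional shifts in $B$ rescale the lifetime by enormous factors.

```python
import numpy as np

# Tunneling rate ~ exp(-B) for a large positive exponent B, so the
# lifetime ~ exp(B) in arbitrary units. B = 400 is a made-up number.
B = 400.0

for shift in (0.01, 0.05, 0.10):  # 1%, 5%, 10% changes in B
    factor = np.exp(B * shift)    # ratio exp(B * (1 + shift)) / exp(B)
    print(f"{shift:.0%} shift in B -> lifetime changes by {factor:.3g}x")
```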

## Bad science reporting versus good science reporting

Today I was greeted with the following line

> Quantum Gas Temperature Drops Below Absolute Zero

This is how Wired reported the news about a quantum system that is effectively at negative temperature. The thing is, negative temperatures are hotter than any finite positive temperature.

One can also check this fact in the Wikipedia entry for negative temperature. The simplest system that has an effective negative temperature is a laser: to get a negative temperature one just needs what is called a population inversion.
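For a two-level system the occupation ratio is $n_{up}/n_{down} = e^{-\Delta E/(k_B T)}$; inverting this for $T$ shows that a population inversion (ratio above one) forces the temperature to come out negative. Here is a minimal Python sketch, with an illustrative level spacing:

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant in J/K

def effective_temperature(delta_E, occupation_ratio):
    """Invert n_up/n_down = exp(-delta_E / (k_B * T)) for T."""
    return -delta_E / (k_B * np.log(occupation_ratio))

dE = 1e-20  # level spacing in joules (an illustrative number)
print(effective_temperature(dE, 0.5))  # ratio < 1: ordinary positive T
print(effective_temperature(dE, 2.0))  # inversion (ratio > 1): T < 0
```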

In that report it was stated that

> Previously absolute zero was considered to be the theoretical lower limit of temperature as temperature correlates with the average amount of energy of the substance’s particles.

The crucial mistake is the expression “Previously absolute zero was considered”, which suggests that we have turned theoretical physics knowledge on its head because a new and revolutionary temperature below zero has been obtained! Moreover, we have measured systems with negative temperatures at least since the invention of the laser, and actually since the invention of the maser (a laser in the microwave region).

The news is actually well reported in Ars Technica. That article explains correctly how to think about negative temperatures.

## If some of my students were writing problems

$\vec v_A, \vec v_B; v_{AB}?$

## Currently at the KITP

Right now I’m in the midst of a program I helped to organize (and am still organizing) at the KITP. The program deals with the question of how to use numerical methods from lattice field theory and gravity to make inroads into interesting (and usually very hard) questions about quantum field theory (and quantum gravity) and the dynamics of the strong interactions at finite temperature (as in heavy ion collisions).

We’ve had a lot of great talks on a wide variety of topics. Personally, I really liked the talk by Philippe de Forcrand on the sign problem. The main reason I liked it is that he had really simple examples illustrating what the sign problem is all about. You can find it here.

And if you want to see what we’ve been hearing about, you can go here and see the full list of talks so far.

Not too long ago we discussed this paper in one of our informal seminars. The paper is called “The eighteen arbitrary parameters of the standard model in your everyday life” by R. N. Cahn, and it dates back to 1996. It is an RMP Colloquium paper.

I think it still reads great, and it explains what the mysteries of particle physics are and how making small changes in the Standard Model could lead to completely different physics.

Well, in those days there were only 18 parameters in the Standard Model. Now that we have neutrino masses there are a few more. Because of this, by many standards, the paper is considered prehistory. On the other hand, one can take it as a benchmark to calibrate all the accomplishments of particle physics experiments since then.

What I like about the paper is that in some sense it gives a feeling for how non-generic the parameters of the standard model are.

## What’s in a picture

[Photograph]

Today’s guessing game is:

What is this a picture of?

## Faster than light neutrino claim

Well, the press is all fired up about a claim of faster than light neutrinos. The claim from the OPERA experiment can be found in this paper. The paper was released on September 22nd and it has already gotten 20 blog links. Not bad for a new claim.

Considering that the news organizations are happily bashing special relativity, one can always rely on XKCD to spin it correctly.

Now more to the point: the claimed early arrival time is 60 nanoseconds. The distance between the emitter and the detector is claimed to be known to about 20 cm, certified by various national standards bodies. A whole bunch of different systematic errors are estimated and added in quadrature, not to mention that they need satellite relays to match the various timings.

60 nanoseconds is about the same as 20 meters of uncertainty (just multiply by the speed of light), and they claim the error to be due both to statistics and to systematics. The statistical error is from a likelihood fit. The systematic error is irreducible and is, in a certain sense, a best guess for what the number actually is. They did a blind analysis: this means that the data are kept in the dark until all calibrations have been made, and only then is the number for the measurement revealed.
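As a quick sanity check on these magnitudes, here is a back-of-the-envelope Python calculation; the 730 km baseline is the approximate CERN-to-Gran Sasso distance.

```python
# Back-of-the-envelope check of the magnitudes quoted above.
c = 299792458.0    # speed of light, m/s
baseline = 730e3   # approximate CERN-to-Gran Sasso distance, meters
dt = 60e-9         # claimed early arrival, 60 ns

print(c * dt)             # ~18 m: the distance equivalent of 60 ns
t_flight = baseline / c   # light travel time, ~2.4 milliseconds
print(dt / t_flight)      # (v - c)/c ~ 2.5e-5, to leading order
```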

My first reaction is that it could have been worse. It is a very complicated measurement.

Notice that if we assume all the systematic errors in their table 2 are aligned, we get a systematic error that can be three times as big. It is dominated by what they call the BCT calibration. The errors are added in quadrature under the assumption that they are completely uncorrelated, but it is unclear whether that is so. And the fact that one error dominates so strongly means that if they got it wrong by a factor of 2 or 3 (also typical for systematic errors), the result loses a fair bit of significance.
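To make the quadrature-versus-aligned point concrete, here is a toy Python comparison. The entries are made up, not the actual values from their table 2 (where, per the above, the aligned sum comes out about three times the quadrature sum):

```python
import numpy as np

# Made-up systematic errors in nanoseconds, with one dominant entry;
# these are not the actual values from OPERA's table 2.
errors_ns = np.array([6.0, 2.0, 2.0, 1.5, 1.0, 1.0, 0.5])

quadrature = np.sqrt(np.sum(errors_ns**2))  # assumes fully uncorrelated
aligned = np.sum(errors_ns)                 # assumes fully correlated

print(f"quadrature: {quadrature:.1f} ns")   # ~7.0 ns
print(f"aligned:    {aligned:.1f} ns")      # 14.0 ns, twice as big here
```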

My best guess right now is that there is a systematic error that was not taken into account. This does not mean that the people who run the experiment are not smart; it’s just that there are too many places where a few extra nanoseconds could have sneaked in. It should take a while for this to get sorted out.

You can also check Matt Strassler’s blog and Lubos Motl’s blog for more discussion.

Needless to say, astrophysical data from SN 1987A point to neutrinos behaving just fine, and they have a much longer baseline. I have heard claims that the effect must depend on the energy of the neutrinos. This can be checked directly: if I were running the experiment, I would repeat it with lower energy neutrinos (for which we have independent data) and see whether the effect goes away.

## Copy and paste and oh what a waste

Right now I’m in the middle of writing a long paper. This is a rather intense and time-consuming effort that I don’t find particularly gratifying. On a good day, I can write a lot. But when I get annoyed at how something is organized, I usually copy/paste and end up reorganizing things, and that is usually not bad (it’s easier than retyping). However…

## Zombie days and Vampire nights

Well, this is just a short personal post about my research. I’m really excited about some of the stuff I’m doing, but the details are still not ready for public consumption. That research is waking me up at night at random times (let’s say 3 or 4 in the morning) and then I have trouble going back to sleep. In a certain sense, this must be how vampires feel: completely alert and awake at night, with a clear vision of what needs to be done and how to do it (this is a typical romanticized version of vampires, which do exist in nature but look nothing like Count Dracula, the type of vampire this post refers to).

This wakefulness at night has profound consequences for my days at work. Basically, I’m not getting enough sleep, and I walk the corridors with a slight headache and a characteristic lack of brain function during the day. Essentially, the only thing that keeps me separated from being a true zombie is that I’m still technically alive and my body parts are not falling off as I shuffle through the corridors.

Getting a sufficiently high dose of caffeine is not doing the usual trick. So if you see me walking around like a zombie: don’t worry. It’ll be fixed during the midnight hours.