Archive for the ‘Experiments’ Category

Well, the press is all fired up about a claim of faster-than-light neutrinos. The claim, from the OPERA experiment, can be found in this paper. The paper was released on September 22nd and has already gotten 20 blog links. Not bad for a new claim.

Considering that the news organizations are happily bashing special relativity, one can always rely on XKCD to spin it correctly.

Now, more to the point: the early arrival time is claimed to be 60 nanoseconds. The distance between the emitter and the detector is claimed to be known to about 20 cm, certified by various national standards bodies. A whole bunch of different systematic errors are estimated and added in quadrature, not to mention the satellite links needed to synchronize the timing between the two sites.

Sixty nanoseconds corresponds to roughly 20 meters (just multiply by the speed of light), and the quoted uncertainty on that number has both a statistical and a systematic component. The statistical error comes from a likelihood fit. The systematic error is irreducible and, in a certain sense, is itself only a best guess. They did a blind analysis: the data is kept in the dark until all the calibrations have been made, and only then is the measured value revealed.
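
To get a feel for the orders of magnitude, here is a minimal back-of-the-envelope sketch in Python. The 60 ns early arrival is the headline number from the paper; the individual statistical and systematic errors below are placeholders of roughly the right size, not the paper's exact figures, just to show how the quadrature combination works.

```python
import math

c = 299_792_458.0        # speed of light, m/s
early_arrival = 60e-9    # claimed early arrival time, in seconds

# 60 ns expressed as a distance: roughly 18 m.
print(f"equivalent distance: {c * early_arrival:.1f} m")

# Combining a statistical and a systematic error in quadrature.
# These values are illustrative placeholders, not the published numbers.
stat, syst = 7e-9, 7e-9  # seconds
total = math.sqrt(stat**2 + syst**2)
print(f"combined uncertainty: {total * 1e9:.1f} ns")
```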

My first reaction is that it could have been worse. It is a very complicated measurement.

Notice that if we assume all the systematic errors in Table 2 are aligned, we get a total systematic error that can be three times as big. It is dominated by what they call the BCT calibration. The errors are added in quadrature on the assumption that they are completely uncorrelated, but it is unclear whether that is so. The fact that one error dominates so strongly means that if they got it wrong by a factor of 2 or 3 (also typical for systematic errors), the significance of the result takes a hit.
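
As a rough illustration of that point, here is a small sketch comparing the quadrature sum of a list of systematic errors with the fully correlated (linear) sum, and with what happens if the dominant term was underestimated. The individual values are made up for illustration; they are not the actual entries of the paper's Table 2.

```python
import math

# Illustrative systematic errors in ns (NOT the actual Table 2 entries):
# one dominant term (think "BCT calibration") plus several smaller ones.
errors_ns = [5.0, 2.0, 1.5, 1.0, 1.0, 0.5]

def quadrature(errs):
    return math.sqrt(sum(e**2 for e in errs))

print("uncorrelated (quadrature):", round(quadrature(errors_ns), 2), "ns")
print("fully correlated (linear):", round(sum(errors_ns), 2), "ns")

# If the dominant term was underestimated by a factor of 3:
inflated = [errors_ns[0] * 3] + errors_ns[1:]
print("dominant term x3, quadrature:", round(quadrature(inflated), 2), "ns")
```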

My best guess right now is that there is a systematic error that was not taken into account: this does not mean that the people that run the experiment are not smart, it’s just that there are too many places where a few extra nanoseconds could have sneaked in.  It should take a while for this to get sorted out.

You can also check Matt Strassler’s blog and Lubos Motl’s blog for more discussion.

Needless to say, astrophysical data from SN 1987A point to neutrinos behaving just fine, and with a much longer baseline. I have heard claims that the effect must depend on the energy of the neutrinos. This can be checked directly: if I were running the experiment, I would repeat it with lower-energy neutrinos (for which we have independent data) and see whether the effect goes away.
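
To put the tension in numbers, here is a minimal sketch comparing the fractional speed excess implied by a 60 ns early arrival over the roughly 730 km CERN-to-Gran Sasso baseline with the rough bound from SN 1987A. The baseline, the arrival window, and the supernova distance below are approximate figures I am plugging in for illustration; they are not quoted in the post.

```python
# Fractional speed excess (v - c)/c implied by a 60 ns early arrival,
# versus the rough SN 1987A bound.  All inputs are approximate.
early_arrival = 60e-9            # s
baseline_m = 730e3               # m, approximate CERN-Gran Sasso distance
c = 299_792_458.0                # m/s

time_of_flight = baseline_m / c  # about 2.4 ms
opera_excess = early_arrival / time_of_flight
print(f"OPERA-like fractional excess: {opera_excess:.1e}")   # ~2.5e-5

# SN 1987A: the neutrinos arrived within a few hours of the light
# after travelling roughly 168,000 light years.
ly = 9.461e15                    # m per light year
sn_baseline = 168_000 * ly
sn_tof = sn_baseline / c
sn_window = 3 * 3600.0           # s, a few hours
sn_bound = sn_window / sn_tof
print(f"SN 1987A rough bound:         {sn_bound:.1e}")       # ~2e-9
```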

Read Full Post »

In case you have not heard, XENON100 released their most recent limits on dark matter. This was covered in the New York Times, and also by Sean. The data can be found in this paper on the arXiv: “Dark Matter Results from 100 Live Days of XENON100 Data”.

The key to these experiments is to keep the region where the signal is supposed to show up completely hidden from the experimentalists until they have decided, using the control region, how to account for all the systematics. Once they do this, they unblind the data and hope that enough events land in the discovery region to claim a discovery. That region can also contain events from other sources, which the experimenters have tried as hard as possible to remove. The data was unblinded about a week ago, and at that point essentially everything was ready for writing a paper. This last step is very fast, because very few events are expected anyhow.
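
Here is a toy sketch of the counting logic behind such a blind analysis; the control-region count, the scale factor, and the observed events are invented numbers, not the actual XENON100 results.

```python
import math

# Toy blind counting analysis (all numbers invented).
# The background in the hidden signal box is estimated by scaling the
# event count observed in a control region, before unblinding.
control_region_events = 200
scale_factor = 0.01          # relative acceptance of signal box vs control region
expected_background = control_region_events * scale_factor   # 2.0 events

# After unblinding, count the events that ended up in the signal box.
observed = 3

# Probability of seeing >= observed events from background alone (Poisson).
p_value = 1.0 - sum(
    math.exp(-expected_background) * expected_background**k / math.factorial(k)
    for k in range(observed)
)
print(f"expected background: {expected_background:.1f} events, observed: {observed}")
print(f"P(>= {observed} | background only) = {p_value:.2f}")
# A p-value this large means the unblinded counts are consistent with
# background, i.e. no discovery.
```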

Before the final announcement that no dark matter had yet been found in the XENON100 direct-detection experiment, I had been hearing rumors all week long that XENON100 had actually seen something.

Well, you can call this the sociology of the field: some people might have been expecting something and claimed that it was in the data; otherwise it's a publicity stunt. There are two experiments with claims of detection: DAMA/LIBRA and COGENT. Many people are very suspicious of the first, and a lot of them should also be worried about the second. With the new data it seems as if both positive claims are ruled out, unless one has a very contrived model of dark matter.

Read Full Post »

Some HEP news

For those of you who are following the news, here is a link to the recent HEPAP P5 meeting report recommending the extension of the Tevatron run.

Read Full Post »

Clifford Johnson pointed me to his post on the quest for perfect quantum fluids. In a certain sense, we are used to thinking about fluids as low-energy phenomena (relatively low-temperature physics). Famous fluids are characterized by fun properties like superfluidity, or, in the case of ferrofluids, behavior that can be a lot of fun to play with in an exhibition. The most perfect fluids will be those with little to no viscosity \eta (viscosity is sometimes related to friction, but this can be misleading).

The recent RHIC experiments that claimed detection of the quark-gluon plasma also produce a type of liquid with very low viscosity. To see how this hot liquid compares with a cool one, one also needs to measure the entropy density s. The quest of who is more perfect than whom depends on the ratio

\frac{\eta}{s}

Whoever gets the smallest value wins. These are difficult quantities to measure, but they can sometimes be estimated from other known data. From the point of view of theory, this figure of merit is the one that allows comparison of theories with different numbers of microscopic degrees of freedom, and it is suggested by various gravity dualities (this way of comparing fluids came from the work of Kovtun, Policastro, Son, and Starinets around 2001-2003, in a series of papers that made a big splash in physics).
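
For the curious, here is a minimal sketch that evaluates the conjectured lower bound \hbar/(4\pi k_B) on \eta/s that comes out of that line of work, and compares it with rough, textbook-order values for water. The water numbers are order-of-magnitude estimates I am using purely for illustration.

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
k_B = 1.380_649e-23        # Boltzmann constant, J/K

# Conjectured lower bound on eta/s (Kovtun-Son-Starinets), in units of K*s.
kss_bound = hbar / (4 * math.pi * k_B)
print(f"KSS bound on eta/s: {kss_bound:.2e} K*s")

# Rough, order-of-magnitude values for water near room temperature
# (illustrative only): shear viscosity and entropy density.
eta_water = 1.0e-3   # Pa*s
s_water = 4.0e6      # J/(K*m^3)
ratio = (eta_water / s_water) / kss_bound
print(f"water sits roughly {ratio:.0f} times above the bound")
```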

There is an issue of Physics Today that is dedicated to the topic of perfect fluids from various points of view. The readers of this blog might want to wander there and look at the expository articles on the subject. Room will be left open for discussion and questions, although I don’t promise that I will be able to answer them.

Read Full Post »

So what happened?

In the New York Times it was announced that a certain cosmic ray experiment is delayed. This is the AMS experiment. The delay might require the space shuttle to be used beyond its official phase-out date. It is caused by the replacement of a powerful superconducting magnet with a much weaker permanent magnet, thereby reducing (quite substantially, it seems) the specs of the equipment.

Someone told me that they’d much rather have 3 years of very good data than a much longer run of poor data, so it seems that some big failure happened with the equipment.

Read Full Post »

Physics usually progresses by getting new experimental data. Given this data, we refine the theories and eventually come up with a picture of how the universe works. However, experimental results can be tricky to interpret. Usually, data is presented as evidence for something, but that often depends on the model of the noise that is expected.
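
As a toy illustration of how much "evidence" can depend on the assumed noise, here is a sketch in which the same observed event count is judged against two different background models; all the numbers are invented.

```python
import math

# Toy illustration (all numbers invented): the same observed counts look like
# "evidence" or like nothing, depending on the background model you assume.
observed = 40          # events in some low-energy bins

for expected_background in (20.0, 35.0):
    excess = observed - expected_background
    significance = excess / math.sqrt(expected_background)  # crude Gaussian estimate
    print(f"background {expected_background:4.1f}: excess {excess:4.1f} events, "
          f"~{significance:.1f} sigma")
```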

My most recent encounter with this aspect of physics was the recent paper on dark matter detection by the COGENT collaboration.

The paper states that they see evidence for dark matter in their results; a lot of evidence, as a matter of fact. This is from a trial run of a new low-noise technology before a full detector is commissioned. Being naturally somewhat skeptical, I raised my right eyebrow a bit more than usual and hurried one floor down to the high-energy experimentalists to ask how these new results should really be interpreted: is it evidence? Or is it possible that the data reported is a bit too optimistic?

Part of the problem is that when I see the graphs, it is not obvious to me what to look for: this is mostly because I don’t usually deal with this type of data. This is when having colleagues who understand these issues can help a lot. Their expert advice really counts for something. I thought it would be a good idea to share some of this information. (more…)

Read Full Post »