Today on the arXiv, Oswaldo Zapata posted an essay on issues of fact and belief in superstring theory. Naturally, Peter Woit decided that this was really important and wrote a whole article about it. Here I will collect a few recollections that serve as a rebuttal/complement to some of those discussions. Mostly, I feel implicated by this discussion, since at least one of my works (together with Juan Maldacena and Horatiu Nastase) is mentioned as changing the history of an idea from belief to fact.

The whole issue is to quantify the following statement: the AdS/CFT correspondence is a *true fact*.

So what is one supposed to make of this? There is no formal proof of that statement. There is no theorem in mathematics that lets one go from quantum gravity on AdS spaces (whatever that is) to a dual field theory that is generically strongly coupled. The coupling constant being strong means that we are short of tools to provide reliable calculations for all the quantities that are relevant to the discussion. We are expected to show this not just for a few calculations: we are expected to show the complete equivalence of all observables and phenomena between these two *theories*.

So clearly, AdS/CFT is not a fact in the mathematical sense. However, many practitioners consider it a fact. How so? There is overwhelming circumstantial evidence in its favor. If you allow me a little digression into the theory of probabilities, there is a natural place for belief: at the level of prior distributions in Bayesian statistics. (Here I am taking the point of view of Jaynes' book on probability theory.)

We can give the statement *AdS/CFT is true* some probability p of being correct. At the same time, its negation, *AdS/CFT is false*, would be assigned a probability of 1-p. This is our belief (bias): a subjective probability quantifies how much weight we give to each of the two statements.

At issue is what value p should attain before we can comfortably state that the first statement is correct. For a mathematical proof the only allowed value is p=1. However, providing such a mathematical proof is certainly beyond what we know how to do so far, so instead we have to ask what value one could reasonably assign to p given our current knowledge, and how close to being a fact the statement really is.

If one begins life as a skeptic (circa 1997), one would start with a value of p of order 0.0001, let's say. Now, a paper in November of 1997 claims that such a correspondence is true and provides various pieces of evidence for it. Given such evidence, one should update the probability p to reflect it. However, we do not know how to compute the usual posterior distribution for p (mostly because quantifying the conditional probabilities in the Bayesian formalism is not feasible in this case). Still, it is clear that p should have increased. How much one decides to increase p depends on various theoretical biases: basically, we have to evaluate the various conditional probabilities based on a subjective estimate.
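The updating step described above can be put in code. A minimal sketch, in the odds form of Bayes' rule; the likelihood ratio below is an invented illustrative number, since, as noted, the real conditional probabilities are not computable here:

```python
def updated(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.

    likelihood_ratio = P(evidence | hypothesis true) / P(evidence | hypothesis false).
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A 1997 skeptic's prior of 0.0001, updated with a made-up likelihood ratio
# of 100 assigned to the evidence in the first paper:
p = updated(0.0001, 100.0)  # roughly 0.01: still skeptical, but far less so
```

The odds form makes the subjectivity explicit: the entire argument over "how strong was this test" is packed into the single likelihood-ratio number.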

For me, personally, the value of p became about 0.5 then. Of course, it was not immediate. Reading the first paper on the subject was really difficult at the time because I did not know what to do with that information. Fortunately, Gubser, Klebanov, Polyakov and Witten provided a dictionary to compare calculations. This made various tests computable. Using properties of various protected quantities, they were able to match the supergravity spectrum with a corresponding spectrum of operators in the dual field theory. Again, quantifying the advance in p is hard, for the same reasons.

I would say that p ~ 0.8 then. Why? Because supersymmetrically protected quantities provide partial evidence, but are by no means a complete test of the dynamics. However, given the match of an infinite series of data, it would be unfair to say that p didn't change. After many more papers came out, one could claim that by 2002, p was between 0.95 and 0.99. Again, that is a bias in computing conditional probabilities. Were one to claim that p ~ 0.999, it wouldn't change the argument much.

Once a conditional probability implies that something is almost a certainty, one should be allowed to call it a fact (subject to revision if future evidence against it is found). In physics we can never do better than this. In mathematics, it stays a conjecture until it is proved. This is the heart of the matter.

Our paper provided another 'infinite series' of evidence that the AdS/CFT correspondence was correct. This evidence did depend on the coupling constant of the Yang-Mills theory and was very non-trivial (yet very computable once we understood what the correct calculation to do was). I remember that when we were discussing the paper with our colleagues at the Institute for Advanced Study before it was released, there was general skepticism over the lunch table about the claims we were making. This all changed the day after it was published.

For me, the AdS/CFT correspondence is a working assumption. I would claim that the current value of p is about 0.99999, but that is just my personal bias in the end. It certainly is very close to one.

After all, over 6000 papers have been written on the subject, and although puzzles have appeared along the way, there has not been any real evidence that the correspondence is fundamentally wrong. As a matter of fact, it is so hard to wrap one's head around it that the idea has gained an enormous amount of traction from people trying to understand what it means. Many of us feel this is a very profound statement cutting to the heart of what a consistent theory of quantum gravity really looks like as a fundamental theory.

I have spent much of the last few years trying to get my teeth into the hard problem of providing a better understanding of the strong coupling limit of Yang-Mills theory on its own. I have been using the AdS/CFT correspondence to check various approximations in field theory, and surprisingly they turn out to match a large number of phenomena. So depending on what I believe to be more true on a given day, I can take the results to mean that they provide evidence for the AdS/CFT correspondence, or evidence for the strong coupling expansions I am advocating.

What am I to do about this? My techniques provide various self-consistency arguments, but there is no independent proof in the mathematical sense. I would claim that the picture I have been working on is very compelling. Others have different approaches that use different assumptions (integrability, for example) and have been able to predict the outcome of various perturbative calculations with this information used as a working assumption. The whole structure seems to be incredibly tight and really hard to penetrate.

I think the field has moved from the question *Is this really true?* to *How does this really work?*, and the validity of the statement can be taken as a fact (with the usual provisos). So, is there no room left for doubt? Quite the contrary: doubts are one of the driving engines of progress. There are healthy skeptics, and then there are flat-out denialists and contrarians. I count myself in the first camp: I'm not yet satisfied with the evidence and I will keep on prodding the AdS/CFT correspondence until I am satisfied. As for denialists, I think they start with p=0 and will never update p according to evidence. They are fanatics, not scientists. The contrarians will claim p=0.5 until a proof is found.

Finally, there is the issue of communicating this information to the public. How does one do this? I really don’t know. But maybe these discussions will help.

on May 12, 2009 at 5:31 pm, Thomas Larsson:
I have posted this at a number of places, so far without any reaction. In short, it seems to me that AdS/CFT applied to the simplest nontrivial 3D model gives the wrong result.

In a recent discussion at Dimitry Podolski’s blog, I became aware of the paper arXiv:0806.0110v2 [hep-th]. Therein, the following statements are proven:

1. AdS/CFT makes a prediction for some quantities c’/c and k’/k, eqn (5).

2. This prediction is compared to the exactly known values for the 3D O(n) model at n = infinity, eqns (28) and (30).

3. The values disagree. Perhaps not by so much, but they are not exactly right.

The standard way of saying this is that the d-dimensional O(n) model does not have a classical gravitational dual, at least not in some neighborhood of n = infinity, d = 3, and hence not for generic n and d. There might be exceptional cases where a gravitational dual exists, e.g. the line d = 2, but generically it seems disproven by the above result. Also note that the O(n) model is one of the most important statphys models, which includes the Ising, XY and Heisenberg models for n = 1, 2, 3.

I am on record as being skeptical about the physical relevance of AdS/CFT, but that was mainly because the premises do not seem to hold in nature: physical gravity lives in dS rather than AdS space, and physical QCD is asymptotically free rather than conformal. But the O(n) model at criticality is conformal, so the premises are satisfied, yet the result is still wrong. If AdS/CFT does not apply to a CFT with infinitely many components or colors, when can it be trusted? And how do we know whether it applies, if we don't have an exact solution to compare to?

on May 12, 2009 at 5:54 pm, dberenstein:
Hi Thomas:

The full statement of AdS/CFT is that a gravitational theory represented by strings on a space that is asymptotically of the AdS form is dual to various large N gauge field theories. Now, a pure gravity limit can be attained only under certain special circumstances, which usually involve the string theory being exactly at the critical dimension. In this limit one can make predictions by computations in classical gravity.

I don’t believe the O(n) model is gauged, so if it has a string dual, it might be very bizarre. To the best of my knowledge it would not be a nice simple supergravity, so standard results appropriate to the pure gravity limit are not applicable. Personally, it seems to me that in the absence of gauge redundancies AdS/CFT does not work. I could be very wrong about this, but that is my prejudice.

As far as I know, there is a lot of fine print in the precise statement of the AdS/CFT correspondence that we still have to decipher.

on May 12, 2009 at 11:44 pm, onymous:
Thomas, you might be interested in hep-th/0210114 by Klebanov and Polyakov, which speculates on the dual of the O(N) critical point at large N. The problem here is similar to that of finding a dual for large N QCD: there’s a large N limit, so you have good reason to hope for a dual string theory, but unlike in N=4 Super Yang-Mills you can’t go to extremely large ‘t Hooft coupling. This means most operators have small anomalous dimension, and in the dual there is a huge number of light fields. Because you can’t integrate out the “string modes”, there’s no simple low-energy effective field theory in which you can compute things. The types of universal predictions you cite depend on knowing that Einstein gravity is the dominant contribution, which is no longer true when there is this proliferation of light fields.

on May 12, 2009 at 5:33 pm, Peter Woit:
David,

It’s interesting to hear your point of view on this. In general, I think lots of people would love to hear more about exactly what is known concerning AdS/CFT, as well as more general conjectural gauge/string dualities. It can be hard to extract from the literature an understanding of what precisely is solidly understood and what is still conjecture.

Personally, I’m not really a skeptic. With probability one, it seems to me that there is some relation between these two theories, but it also seems clear that the full exact relation is still not understood. Oddly enough, two years ago when Princeton put out a press release about new results of Klebanov et al., I remember being surprised, since the specific result was something I had thought was already known. So my beliefs about the evidence for AdS/CFT were stronger than the actual situation warranted, and this is quite possibly still true.

AdS/CFT is a specialized topic which plays only a small part in the 25-year-long publicity campaign for string theory as a 10/11d unified theory of particle physics. Zapata is writing not just about AdS/CFT, but about this much more speculative project, a project which has been an utter failure. He, along with many other string theorists these days, seems to me to be trying to confuse these two very different things. I think it will ultimately be a mistake for those in the AdS/CFT business to allow their work to be used to prop up the failed speculation that has ended up with the landscape fiasco. If they do this, the very public failure represented by the landscape will blow back and harm their own subject. AdS/CFT is a perfectly conventional research program; allying it with the faith-based, PR-driven string theory unification program that Zapata portrays doesn’t look like something that is going to end well…

on May 13, 2009 at 2:14 am, dberenstein:
Hi Peter:

I will try to make more posts on where the battle lines on AdS/CFT are drawn.

Regarding unification via strings, I feel some disappointment about some issues, but I don’t agree with the statement that it is a completely failed project.

on May 12, 2009 at 6:12 pm, nige:
Wow, are you the Berenstein who discovered that 5-d anti-de Sitter spacetime has an N-1 = 5-1 = 4 dimensional boundary, just as a 3-dimensional image forms a hologram in 2 dimensions?

I love the idea that superstrings in anti-de Sitter 5-dimensional spacetime correspond to the conformal field theory of particle physics in 4-dimensional spacetime, with black holes in anti-de Sitter spacetime corresponding to real radiation on the 4-dimensional spacetime hologram (via losing one dimension). It’s a wonderful story to tell kids in place of the old fairy tales.

Can I just offer you lots of good luck with using AdS/CFT to approximately model strong interactions between hadrons? You’ll need it.

on May 12, 2009 at 6:31 pm, dberenstein:
Hi Nige:

I’m surprised that I could be credited with discovering that correspondence. I merely claim to be one of the links in providing evidence for such a conjecture. The discovery was due to Juan Maldacena in 1997, with various influences from Alexander Polyakov and Gerard ‘t Hooft.

Thanks for the good luck. As far as I am concerned, if I fail to unravel the strong interactions, I will be in very good company.

on May 13, 2009 at 12:51 am, George Musser:
The application of Bayesian reasoning in this way strikes me as problematic, for all the reasons that any meta-analysis can go wrong. It would seem to favor any theory that has been the subject of a large number of papers. More importantly, whether a hypothesis is fact must be determined by experiment. But whether AdS/CFT is a fact seems beside the point right now. It has been a very useful correspondence and has opened a rich sector of theory to explore.

George

on May 13, 2009 at 1:09 am, dberenstein:
Hi George:

There is no other way in which I know how to quantify belief.

In some sense, Bayesian reasoning is the only sensible way to do meta-analysis in systems with very incomplete information. This is why I picked Jaynes’ book on probability. It explains it very well. (Not everyone agrees with this interpretation).

Now, a theory does not automatically get favored just because a lot of papers have been written about it. This is because of correlations: one has to count enough statistically independent tests. Repetitions of arguments don’t count.

You can ask similar questions about probabilistic proofs in mathematics, where quantifying the probabilities is less of an issue.

on May 13, 2009 at 7:20 am, Robert:
I think it is a very valid discussion to argue about the validity of certain statements of some theory, although I don’t believe it is really useful to attach (quasi-)quantitative measures like probability (philosophers have tried this for quite some time and failed for all numerical values other than p=0 and p=1); it belongs more in the “beyond reasonable doubt” category.

Without doubt, it is always a far too strong criterion to require a mathematically sound “proof” for non-trivial statements in physics. Otherwise, I could as well claim that we have no proof that QCD (with quarks) really describes protons and strong interactions.

I scientifically grew up in Hamburg, where there was a strong group working in axiomatic quantum field theory (of Haag-Kastler type). There, especially the younger ones (as always, there is a tendency to mostly repeat and amplify the most extreme statements of one’s masters; the same, btw, happens in string theory) had serious doubts about any type of perturbative QFT, like for example QED, given the fact that the perturbation expansion is known not to converge, etc. They wanted mathematical proof and managed to completely not notice the impressive numerical agreement between perturbative calculations and experiment. Still, they were very nice people who also had some deep physical insights, and I am still thankful for learning a lot from them.

But what really annoys me is when bold statements on the status of theories (like strings) are derived from quotations from the introductory sections of overview papers and textbooks, like the ones about the status of quantum gravity within string theory. Not only does the author of the essay seem not to completely understand the concept of renormalisability (and why having the tower of massive states could improve the UV behaviour), but he also does not give enough credit to what it means to have an interacting theory with a spin 2 particle for which we have good evidence of UV finiteness. Not to mention the sigma-model beta-function type arguments he is completely ignoring.

For AdS/CFT, I think there are a number of valid questions that need more understanding. One is to work out even more which part of the evidence is truly dynamical and which part is kinematic (in the sense that obviously both sides of the correspondence share the same supergroup containing SO(4,2)xSO(6)). This is not to be underestimated: my remote understanding of all this integrability business is that a lot of it is due to symmetry, and that it is only the “dressing factor” that depends on dynamics. The other question is at what level of generality we should believe the AdS/CFT correspondence: the strictest would be to only claim validity for AdS5xS5 at infinite N, and at the other end would be some very general framework applicable to any type of theory with some type of holographic relation (and no obvious connection to string theory at all).

But it is a fact that the empirical sciences (yes, yes, I count string theory in that category) are based on the consent of the scientific community and not on proof. This realisation is not new; Thomas Kuhn’s books are always a good read. And yes, the empirical sciences work.

on May 13, 2009 at 11:08 am, Haelfix:
Consent of the scientific community is a good criterion, but it has also historically failed on a number of occasions. I always like to give the example of astrophysics and the ‘islands of matter’ point of view regarding galaxies/stars etc. It’s amusing how we weren’t even close. Still, that was the best point of view given the available evidence at the time, and I probably would have shared the wrong view had I been alive (and for the record, I have no doubt that AdS/CFT is correct).

On the other hand, Bayesian probability strikes me as problematic to use. Not that I don’t believe that conditional probabilities can and will improve things; I just don’t know how to value them quantitatively in a way that’s systematic, especially on the theory side. Of the infinite number of ways AdS/CFT could go wrong, we only have finitely many calculations showing that it is correct.

on May 13, 2009 at 11:36 am, Giotis:
I don’t find this assigning of probabilities (based on gathered evidence) an appealing idea. Someone could always argue that the evidence is circumstantial.

I think that for a true validation beyond any reasonable doubt, a better understanding of the deep underlying theoretical principles that lead to the correspondence is necessary, i.e. how and why AdS/CFT works. Presently I’m not sure that these principles are well understood.

on May 13, 2009 at 3:37 pm, dberenstein:
Hi Giotis:

This notion of probability, in this context, is a model for how people make decisions and come to conclusions. Your statements about circumstantial evidence reflect a (neural) bias in the assignment of conditional probabilities. The notion of probability that Jaynes advocates is that probabilities are the devices you use to place bets on different outcomes. Given more knowledge, you update your betting strategy. After reading about it for some time, I became convinced that this is the best way to interpret situations with very incomplete information.

on May 13, 2009 at 6:19 pm, Giotis:
Ok, I understand your argument, but my point was that if the theoretical mechanism behind AdS/CFT is not sufficiently understood and explained, the AdS/CFT case will remain open.

If you had asked someone in the 19th century to assign a probability to whether the Newtonian model of gravity is true, he would most probably have assigned (based on the gathered evidence) a very high number. But the physical model was not true. The correct question to ask, in my opinion, is why the model is true and how it works. What are the deep theoretical reasons that lead me to conclude that the model is correct? That is what is really important. If you had asked this question of the guy from the 19th century, he would not have been able to give you a convincing answer, exactly because it had not been explained theoretically and sufficiently why and how the model works. He could only argue that, based on all the gathered evidence, the model works and thus it is correct. GR, on the other hand, provides a very convincing and elegant theoretical structure to support the evidence. It gives a convincing theoretical and physical explanation.

Bottom line: if I were a researcher with the task of validating AdS/CFT, I would not try to gather more evidence (data) for the correspondence; instead I would try to explain the exact mechanism of the correspondence and the theoretical principles underneath it that justify it. If there is a theoretical gap, the evidence is never enough.

on May 13, 2009 at 4:12 pm, Moshe:
I also find Bayesian probability problematic. The only way I know to precisely interpret probabilities is as relative frequencies in an ensemble. In the Bayesian context I get that 0.85 corresponds to high confidence, but I see no operational way to distinguish it from, say, 0.86.

As for the AdS/CFT issue, I think the question is framed the wrong way. You never have to make a decision about what is right and what is wrong, and I am happy to stay forever agnostic about the question when framed this way. What you do have to decide is which directions of research are likely to be productive and interesting. In the time when AdS/CFT had only partial evidence, it was interesting to ask whether some partial version of it was correct, and to devise tests of that. As the correspondence passed more and more of those tests, this class of issues became less likely to produce new insights. Currently, as David points out, it seems much more productive to assume the correspondence and ask for its implications. Framing that collective decision as a “belief” misses the point.

on May 13, 2009 at 4:59 pm, Moshe:
Let me state my confusion about Bayesian analysis more precisely. In any application of probabilities, I think it is only sensible to talk about them with a finite precision, which depends on the context. In first-year labs we get drilled about the number of significant digits in any result, and I think we have to carry that lesson in our minds whenever we speak of probabilities. In a frequency-based approach, the precision with which you are allowed to speak of probabilities is related to the (necessarily) finite ensemble you have in your application. In the Bayesian approach, I see no way to decide how reliable any estimate of the probability is. When you say that your degree of confidence in AdS/CFT is “0.99999”, how many of those 9s am I supposed to take seriously, and when does that become indistinguishable from 1?

on May 13, 2009 at 6:25 pm, dberenstein:
Hi Moshe:

These are the odds I would place on the result being correct: 100000 to 1 (it doesn’t mean I would place such a bet, because of monetary constraints). So if someone offered me a hundred-to-one bet I would take it, but I would not take a bet of a million to one.

Saying that the probability is one means I would be willing to pay a finite amount for nothing in return if I get proved wrong (I’ll take any bet against it).

If you want to, you can use the 5-sigma rule to call something true.
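For reference, the 5-sigma rule mentioned here corresponds to a one-sided Gaussian tail probability of about 3 × 10^-7. This is a standard statistics conversion, not anything specific to AdS/CFT; a minimal sketch:

```python
import math

def tail_probability(n_sigma: float) -> float:
    """One-sided probability of a Gaussian fluctuation beyond n_sigma standard deviations."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

# 5 sigma: about 2.9e-7, i.e. odds of roughly 3.5 million to 1
p_5sigma = tail_probability(5.0)
```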

on May 13, 2009 at 8:16 pm, Luboš Motl:
Dear David, I agree with Moshe, as was seen here,

http://motls.blogspot.com/2005/12/bayesian-probability.html

http://motls.blogspot.com/2006/01/bayesian-probability-ii.html

Don’t you think that your unwillingness to bet 100,000 to 1 in favor of AdS/CFT mostly reflects your personal psychology – a kind of precautionary principle (protecting you against a deadly bet) – and not any actual, objective facts and insights about science?

You say that you can use the 5-sigma rule to call something true but I think that Moshe thinks, just like I do, that you cannot even objectively say whether something is Bayesian-true at the 5-sigma level. How do you really decide that? Is AdS/CFT 5-sigma true?

Don’t get me wrong – I think that the Bayesian formula is a fine formula dictating the optimum way to deal with some evidence when deciding about a hypothesis. But the subjectivity comes in all the details – what hypotheses are a priori possible and what their priors should be; which evidence should be considered at all; which evidence should be considered independent of other evidence; how far repeatedly observed successes can be extrapolated as a general fact.

There’s no canonical objective way to decide about any of these questions. So any Bayesian probability – including your 5-sigma criteria – that has no frequentist interpretation will remain subjective, not a part of objective science, won’t it?

on May 13, 2009 at 9:22 pm, dberenstein:
Lubos:

I would really urge you to read Jaynes’ book. He does a much better job of explaining it than I do.

Given a finite set of choices, there is a way to decide on the optimal assignment: use information entropy (you are supposed to maximize it).
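A small illustration of the maximum-entropy rule for a finite set of choices (a standard result, not specific to this discussion): among all distributions over n options, the uniform one maximizes the Shannon entropy, which is why it is the natural uninformative prior.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy -sum(p * log p) in nats, skipping zero entries."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

uniform = [0.25, 0.25, 0.25, 0.25]  # maximum-entropy prior over 4 choices: entropy log(4)
biased = [0.70, 0.10, 0.10, 0.10]   # any bias lowers the entropy below log(4)
```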

Given a possibly infinite set of choices, which we cannot enumerate or even fathom, you have to pick some priors and run the machine. Otherwise you get stuck at step one. Under good circumstances, the influence of the initial priors rapidly disappears and you are left with meaningful information (this is why we can give confidence intervals for parameters in experiments).

In the end, I think one cannot hide from the fact that various statements are subjective. But if we are honest about them, there is still scientific value in the process.

on May 13, 2009 at 6:30 pm, Successful Researcher: How to Become One:
Great post! Thanks!

on May 13, 2009 at 8:05 pm, Moshe:
David, I still don’t get it. The process you describe for arriving at a confidence level seems to have noise of order one. I can imagine two separate people arriving at dramatically different “betting odds” using the same evidence. This is fine when you discuss e.g. the financial markets, but when, for example, trying to interpret probabilities in the context of eternal inflation, presumably as some attribute of nature, and taking even exponentially small differences very seriously, I’d need something much more precise than just the process you describe of updating your confidence level using new evidence.

on May 13, 2009 at 8:54 pm, dberenstein:
Yes. It has noise of order one. But there is some consensus. We need four conditional probabilities:

~~P(AdS is true | test is passed) = 1~~

~~P(AdS is true | test is not passed) = 0~~

P(test is passed | AdS is true) = 1

P(test is not passed | AdS is true) = 0

(by definition)

~~P(AdS is false | test is passed) = q~~

~~P(AdS is false | test is not passed) = 1-q~~

P(test is passed | AdS is false) = q

P(test is not passed | AdS is false) = 1-q

The issue is estimating the second set (which is where bias and incomputability come in). Obviously q<1 for the test to be meaningful. If you believe that a test is strong, then q<1/2 and you update accordingly. If you believe a test is very strong, then you set q<0.1 (let’s say). You still have to correlate q with the previous information.

The thing that is hard to understand is all the possible ways in which a coincidence could have passed as a test, which would make q close to one rather than pushing it in the other direction. This is why there will be large variance. A contrarian would always take q close to one (meaning no test is strong; but any value of q<1 would disprove the conjecture in the case of a failed test).

Get enough of these (strong) tests that you consider independent, and then

p(AdS is true)

becomes close to one in a meaningful way, even in the presence of a large community with different biases. So long as most of them occasionally assign a small q, the bets will be stacked very favorably in the direction of the conjecture (unless you were so biased against it initially that no matter how much you update, you would never accept the theory as being probable).
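With the conditional probabilities above (P(test passed | AdS true) = 1 and P(test passed | AdS false) = q), Bayes' rule sends p to p / (p + (1-p)q) after each passed test. A minimal sketch of the iteration; the sequence of q values is invented purely for illustration:

```python
def update_on_passed_test(p: float, q: float) -> float:
    """Posterior P(AdS true | test passed), assuming P(pass|true)=1 and P(pass|false)=q."""
    return p / (p + (1.0 - p) * q)

p = 0.0001  # the 1997 skeptic's prior from the post
for q in [0.5, 0.3, 0.1, 0.1, 0.1]:  # subjective strengths of five independent tests
    p = update_on_passed_test(p, q)
# Each passed test with q < 1 raises p; strong tests (small q) raise it quickly.
```

In odds form, the passed tests simply divide the prior odds by the product of the q's, which is why a handful of strong, independent tests can overwhelm even a very skeptical prior.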

It is often said that theoretical physicists do not believe in coincidences, and every piece of evidence should be taken as a strong hint.

As for measures in eternal inflation, I don’t believe that they are meaningful: we do not have an ensemble of universes on which to measure. As far as I am concerned it is an undecidable problem.

However, the mathematical conjecture of AdS/CFT has a number of people working on it and one can imagine progress in the future, so it makes sense to take a bet on it (or certain technical aspects of it).

on May 13, 2009 at 9:19 pm, Luboš Motl:
Dear David,

first, I think that you have mixed up a pair of the probabilities and made an incorrect identification of two of them. In my opinion, the correct four conditional probabilities should have been:

P(AdS is true| test is passed) = q1

P(AdS is true| test is not passed) = 0

(by definition)

P(AdS is false|test is passed) = q2

P(AdS is false|test is not passed) = 1

The two “uncertain” probabilities, q1 and q2, are independent: their value is very important and influential, especially when we are trying to decide that a certain number of tests can be extrapolated to a general fact. None of them is really known and it is also unknown which tests can be counted as independent. The prior probabilities are unknown, too.

Note that the conditional probabilities of hypotheses being true are never 1. That’s why we can never be sure – after a finite number of tests – that a theory is right in science (despite your wrong formulae that suggest otherwise). We can only be sure that we’re wrong. See also Mr Feynman explaining this simple fact.

Second, I don’t quite understand how there can be a “consensus” about a quantity whose error is of order 100%. Isn’t that a textbook description of a situation where there cannot be any consensus? This debate itself suggests that there is no consensus, and if one exists, it probably disagrees with you. There’s no consensus on whether Bayesian probability has an objective meaning (it doesn’t) and no consensus on whether AdS/CFT is 5-sigma true (it is!). There’s no consensus because Bayesian probabilities are subjective.

It seems to me that the rest of your comment is affected by the four incorrect conditional probabilities in your list, including the possibility of a complete proof of a hypothesis by a finite number of tests. The only thing that seems unaffected is your opinion that a large number of tests can compensate for a reduced “influence” of one test in the eyes of a “contrarian”.

I beg to disagree. You can make a large number of tests and they pass but there is no way to show that they’re really independent. Actually, when things – like AdS/CFT – are understood more completely, it always becomes clear that the tests were really not independent but mostly manifestations of the same underlying mechanism. The more we know, the less independent the individual observations and predictions become.

So a “contrarian” won’t reach the consensus after many tests because she will also say that the tests were not independent, so the probability that the conjecture is correct shouldn’t have been pushed towards one too tightly.

Moreover, the very label “contrarian” incorrectly suggests that you can objectively prove that she is biased. You can’t; the label is demagogic. She may very well be right. The tests that you consider independent, and moving the probability that the hypothesis holds very close to one, can turn out to be a repetition of the very same test – they almost certainly will – and the real problem that decides about the validity of the hypothesis can be found completely elsewhere – in completely different tests – sometime in the future.

Most obviously, the correct probability that a well-defined conjecture is correct is either 0% or 100%, so any number in between is clearly incorrect. ;-)

Best wishes

Lubos

on May 13, 2009 at 9:28 pm, dberenstein: Hi Lubos:

My conditional probabilities are correct. However, my brain misfired. Your conditional values are correct.

What I meant originally was

p(test is passed|AdS is true) = 1

p(test is not passed|AdS is true) = 0

etc.

Test passed and test not passed are mutually exclusive, so the probabilities must add up to one in any situation. One of them being zero forces the other one to be equal to one, etc.

The priors are up for grabs (as always).

on May 13, 2009 at 9:58 pm, Luboš Motl: Dear David,

once again, the key mistake in your four probabilities – that you may have overlooked again – is that you wrote 1,0,q,1-q, but the correct ones should have been q1,0,q2,1. You permuted them. There is no “1” among the first two figures (that the hypothesis is correct). Please just look again.

Now, concerning the independent point: Oops. Both of us were 50% right. I actually agree that for one test, q2 = 1-q1. The probability (even a conditional probability) that something is false is one minus the probability that it is correct. ;-) So the last two probabilities among the four that you wrote are redundant (which confused me, because you shouldn’t have written them at all). Still, you have permuted them in a seriously wrong way.

The Jaynes book is available online:

http://omega.albany.edu:8008/JaynesBook.html

I agree that scientific ideas about something may converge to probabilities being one or zero – but there is no objective way to decide about the speed of convergence, and whether it keeps converging, because there is no way to decide whether the new evidence is independent of the old evidence and because the evidence should certainly not be “double counted”.

If there is an a priori reason that without assuming the hypothesis, all the tests should generate random and independent results, the tests are independent, and you may increase your certainty by repeated tests (this is also the regime when a frequentist interpretation of the probability emerges). But that’s surely not the case of the AdS/CFT or any other hypothesis in theoretical physics – because all the tests are related in various ways and one must be extremely careful not to double-count.

So while I think that AdS/CFT is 5-sigma true (according to my psychological Bayesian measures), I don’t think that the very number of 5,000 papers supporting it is enough to show this fact, even if all of them contained different successful tests. It’s because the tests are simply not independent. The more we understand things, the less independent they become. (On the other hand, one test may often be enough to nearly settle the case if the a priori probability that it passes was very low, but it did pass.)
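This point about independence can be made quantitative in a toy model. The sketch below (all numbers are invented for illustration, and it assumes the simplified setup from earlier in the thread: a true hypothesis passes every test with probability 1, a false one passes each genuinely independent test with probability q) shows why N copies of the same test carry no more weight than one:

```python
# Toy illustration (numbers are made up): how N genuinely independent passed
# tests update a prior, versus N perfectly correlated tests that merely
# repeat one test and therefore count as a single piece of evidence.
def posterior(p_prior, q_false_pass, n_independent_tests):
    """Posterior P(hypothesis | all tests passed), assuming a true hypothesis
    passes every test with probability 1, while a false hypothesis passes
    each *independent* test with probability q_false_pass."""
    likelihood_true = 1.0
    likelihood_false = q_false_pass ** n_independent_tests
    return likelihood_true * p_prior / (
        likelihood_true * p_prior + likelihood_false * (1 - p_prior))

p0, q = 0.5, 0.5  # hypothetical prior and per-test false-pass rate

# Ten genuinely independent tests push the posterior close to one:
print(posterior(p0, q, 10))  # ~0.999

# Ten repetitions of the *same* test are one effective test:
print(posterior(p0, q, 1))   # ~0.667
```

The gap between the two numbers is exactly Lubos’s point: the count of passed tests means little until one knows how many of them are effectively independent.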

People could have tested Newton’s law with one planet, another planet, … an eighth planet. It would still work. But testing one million asteroids would not make it insanely unlikely that Newton’s theory fails. To show that Newton’s theory is actually not quite right, one must make completely different tests – high-precision measurements, relativistic setups, quantum measurements, or dark matter (assuming that MOND will be shown partially correct for a while), etc.

In the past, the same qualitative orbits with different numerical distances of planets would be viewed as different tests. We’re smarter today, so we talk about the functional laws and the whole shape of the function is “one test” of “one law” (where you substitute different numbers) and trying the same with two planets won’t replace “p” by “p squared” for the probability that Newton’s theory fails. As people have been (and still are) unifying physics, our knowledge was getting increasingly “functional”, so previously independent tests (and concepts as well as phenomena) became manifestations of the same thing, and we needed to invent completely new tests to look for a possible breakdown of the old theories.

In the AdS/CFT context, one could write 20,000 papers about 3-loop tests that would look at very different quantities of many kinds, and they would pass. But that would not be enough to reduce the risk of failure to zero because there could exist a very good reason why the 3-loop quantities agree but the map breaks at 4 loops or 2009 loops. The very choice of the tests and their compositions to find the truth quickly is not known.

I think it’s still true that the only case when the numerical value of Bayesian probability starts to make sense is when you can interpret it as a frequentist probability, too. If the frequentist picture is impossible, then priors are far from the only factor that makes it impossible to define the Bayesian probability objectively.

Best wishes

Lubos

on May 13, 2009 at 9:59 pm, dberenstein: Hi Lubos:

Sorry it took a while. I fixed it. I got the notation backwards, which led to an incredibly silly misunderstanding.

on May 13, 2009 at 10:04 pm, dberenstein: Oh Lubos:

By the way, we are really in agreement on all the important details.

on May 14, 2009 at 7:38 am, Luboš Motl: Nice to hear, David, and expected. So do you also agree with Moshe? Because I essentially do, and I want to see how transitive the relationship is. ;-)

on May 13, 2009 at 11:24 pm, Just Learning: Since we are talking about odds, people should use the logit function

logit(p) = log(p / (1 - p))

http://en.wikipedia.org/wiki/Logit

So David gives AdS/CFT a value of logit = 5 using base 10.

I think that would sound cooler.
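For the record, the conversion is a one-liner. A minimal sketch (the function name `logit10` is mine; the standard logit uses the natural log, so this is the base-10 variant the commenter proposes):

```python
import math

def logit10(p):
    """Base-10 log-odds: logit10(p) = log10(p / (1 - p))."""
    return math.log10(p / (1 - p))

# A logit of 5 in base 10 corresponds to odds of 10^5 : 1 in favor:
p = 1e5 / (1e5 + 1)
print(logit10(p))    # -> 5.0 (up to floating point)
print(logit10(0.5))  # -> 0.0, i.e. even odds
```

One nice feature of log-odds is that Bayesian updating becomes additive: each passed test simply adds the (base-10) log of its likelihood ratio to the current logit.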


on May 14, 2009 at 6:11 am, Thomas Larsson: onymous, I recall seeing the Klebanov-Polyakov paper many years ago. I was more excited about the title than the content.

David, there is a difference between “the AdS/CFT correspondence is a true fact” with and without the addition “but it does not apply to spin models”. It is exactly such caveats that tend to be lost in the transmission to non-specialists. Besides, I doubt that all practitioners agree with your pessimism, since some people are evidently trying to apply it to spin models. And AdS3/CFT2 does apply to 2D spin models, right?

Anyway, I don’t think that anyone really doubts AdS/CFT per se, but rather its domain of useful validity. Spin models apart, it is unclear to me if AdS/CFT can be usefully applied to non-susy QCD; even if a string dual does exist, it might very well be so complicated that you cannot really compute anything with it.

What I seriously doubt is that you can use AdS/CFT backwards, even in principle, as a means to understand physical gravity. This is not really based on the coincidence that the cosmological constant happens to have the wrong sign. Rather, it is because I believe in locality. Both QFT and GR are local theories in an appropriate sense, and it would be weird if what happens here and now is best described in terms of data living on a holographic screen in an AdS region of the multiverse. In fact, my belief in locality was so strong that I discovered the mathematics necessary to reconcile it with spacetime diffeomorphisms :-)

on May 14, 2009 at 7:30 am, Haelfix: Another way to think about the AdS/CFT correspondence, and in my opinion its biggest plus, is that it wasn’t immediately falsified.

Consider the book Hamlet and a bunch of monkeys typing random letters on typewriters (e.g., the undergrad problem we all did in stat mech), producing books that have the same cover and appear the same except for the contents within. Let’s throw all the books in a pile. We are interested in finding Hamlet within the pile.

Let’s also assume that you can only read the first sentence of each book that you randomly pick up.

Suppose that after a few misfires, you pick up a book and the first sentence that you read off is exactly identical to Hamlet. What are we allowed to conclude? Almost certainly the odds are astronomically high that we did indeed find Hamlet after all, but still we only know that we have found one sentence that coincides. In order to precisely measure the odds that we are correct, we need to know how many total books are available, the number of letters in the alphabet, the number of remaining words, and so forth. A frequentist would need to know the details of the ensemble in order to proceed, whereas a Bayesian will still be able to give some sort of number.
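The number a Bayesian would hand over can be sketched explicitly. Every input below is an assumption of the toy model: one book in a pile of N is Hamlet, the rest are monkey-typed with each character uniform over an alphabet of size A, and the matched first sentence has L characters:

```python
# Toy Bayesian estimate for the Hamlet-in-the-pile story.
# Assumptions: exactly one of n_books is Hamlet; the others are monkey-typed,
# so each of their characters independently matches Hamlet's with prob. 1/A.
def p_is_hamlet(n_books, alphabet, sentence_len):
    prior = 1.0 / n_books                     # chance we happened to pick Hamlet
    q = (1.0 / alphabet) ** sentence_len      # chance a monkey book matches anyway
    return prior / (prior + (1 - prior) * q)  # Bayes: P(Hamlet | sentence matches)

# Even with a billion books, a 40-character match is essentially conclusive:
print(p_is_hamlet(10**9, 27, 40))

# With 10 books and a 1-character "sentence", the evidence is much weaker:
print(p_is_hamlet(10, 27, 1))  # -> 0.75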

on May 14, 2009 at 7:47 am, Moshe: This last example is a good one, and also pretty funny, actually. Hey, if giving a number is so urgent, I can give one as soon as you ask me, you can even spare me the details if you want to save time :-). Can’t guarantee the number means anything though, because, well, it doesn’t take into account all those pesky details.

on May 14, 2009 at 1:56 pm, Luboš Motl: Dear David,

when you scratched the four conditional probabilities in order to fix them, you inverted them! ;-) What you have now are the predictions of AdS/CFT (or its negation) for the tests. But that’s not what’s directly relevant for deciding whether the AdS/CFT hypothesis holds.

To decide whether “AdS” holds, you really wanted conditional probabilities like P(AdS_is_true|test_worked) – in words “probability of AdS/CFT given the test”. By Bayesian inference,

P(AdS|test) = P(test|AdS) P(AdS) / P(test)

where P(AdS) was the prior probability for AdS/CFT to hold and P(test) is the marginal probability that the test works – as it did – according to the “average” hypothesis. Both P(AdS) and P(test) are affected by subjective choices.

Best wishes

Lubos

on May 14, 2009 at 4:19 pm, dberenstein: You are right.

I still think it is better (operationally) to remember that our updates to probabilities are given by posterior distributions for the probability given new evidence:

P_pos(A) = P_prior(A) P(B|A) / (P_prior(A) P(B|A) + P_prior(-A) P(B|-A)).

In the variables I was using above, a successful test gives

p_{pos}(AdS is true) = p / (p + (1-p) q)

p_{pos}(AdS is not true) = (1-p) q / (p + (1-p) q)

which clearly shows that the probability of the statement being correct increases after a test is passed, so long as q is less than one and p is greater than 0.

These new numbers are then taken as priors in the next iteration. It is hard to imagine how to compute P(AdS|test) directly, without the rules for statistical inference.
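This iteration is simple enough to write down as code. A minimal sketch, with the q values invented purely for illustration (in practice, as the thread discusses, estimating q for each test is the hard part):

```python
# Sketch of the iterative updating described above: after each passed test,
# the posterior p / (p + (1-p) q) becomes the prior for the next test.
def update_after_passed_test(p, q):
    """p = current P(AdS is true); q = P(this test passes | AdS is false)."""
    return p / (p + (1 - p) * q)

p = 0.5                      # some starting prior
for q in [0.1, 0.3, 0.05]:   # hypothetical false-pass rates for three tests
    p = update_after_passed_test(p, q)
    print(p)                 # grows after each passed test, since 0 < q < 1
```

Each update strictly increases p as long as 0 < q < 1 and p > 0, which is exactly the claim in the comment; a test with q close to 1 (one a false hypothesis would likely pass anyway) barely moves the posterior.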

on May 14, 2009 at 5:28 pm, Luboš Motl: Right, David, I think that your formulae and interpretations are correct. Well, p is hard to know but it only affects us once. On the other hand, q affects us many times and it is still hard to know. ;-)

What is “q” for a test of 4-loop corrections to the dimension of a BMN operator? I can even give you the operator, you are a 1/3 shareholder of the operator, and your only simple task is to calculate “q” (that the test passes even if AdS/CFT is wrong). Isn’t it easier to calculate the dimension? :-)

Next, I give you another BMN operator which is slightly more complex, more excitations etc. Is “q” going to increase or decrease assuming that we know that the previous test passed? On one hand, it is a harder test, on the other hand, it is a similar test to the previous one.

All these things are tough. I think that everyone has some expectations about her “q” based on analogies with previous tests of related calculations (or measurements). But it is never clear which analogies and generalizations are right…

on May 14, 2009 at 7:50 pm, sean h: Just a small comment. As I think I posted before, I think that

http://arXiv.org/pdf/0811.2081

is some of the strongest evidence yet for AdS/CFT being true. It matches a quantity (a power law and a coefficient of 1.89) at finite temperature, using a Monte Carlo lattice simulation on the one hand and black hole geometry on the other. There is no supersymmetry or integrability or anything of that sort involved. It is very hard to see how it could be true by accident…

And Thomas, AdS/QCD already *has* been useful in QCD. It gets an answer for the shear viscosity over entropy density that is closer to the experimental value than any other theoretical computation. Of course, it is almost certainly not the correct value, but getting a number in the right ballpark “by any means necessary” has been useful for the nuclear physics community (by their own admission).

on May 14, 2009 at 11:40 pm, Peter Shor: I looked at the plot in the paper referenced in Sean H.’s comment (about the only thing I’m competent to judge in the paper), and there’s no way this should be described as matching the value of 1.89. It does look like it’s slightly smaller than 2, so it matches the theoretical prediction that way, but calling it a match of 1.89 would be completely unacceptable in an experimental paper. Where are the error bars on 1.89?

on May 15, 2009 at 6:15 am, Moshe: This is a perfect comment to illustrate how a given piece of evidence might be given radically different weight (q) depending on one’s background. For example, when you evaluate the evidence Sean mentioned, your value of q increases for every paper you ever read where people struggled to get within the right order of magnitude for a generic strongly coupled observable.

on May 15, 2009 at 5:37 pm, Luboš Motl: Dear Moshe,

I completely agree. Concerning 1.89, they could have been somewhat clearer. This precise number apparently has error margin around 0.03 in the non-analytic side. The situation is worse than Peter Shor suggests: the paper they cite as containing the figure doesn’t contain 1.89 at all.

On the other hand, getting this coefficient approximately right is amazing – and extremely unlikely to occur by chance. It’s not just one number. As eqn (9) indicates, it’s 1.89 T^{-3/5} lambda^{1/5} for the logarithm of the Wilson loop.

The validity of the power laws is nontrivial by itself, the right exponents are very unlikely, and the coefficient is even more unlikely because it could a priori contain things like “d0”, explained below equation (6). The numerical value of “d0” is 73,000 (seventy-three thousand). It’s pretty nontrivial to get the right power law with a sensible coefficient.

There are also more analytical matches – with all the highly transcendental amplitudes – and lots of “qualitative” matches about the spectrum of branes, excited states, black holes, their properties etc. To get a good idea how the new successful tests reduce the probability that the conjecture is wrong, one has to be a kind of expert, and even if she is, she is never a perfect expert and all the conclusions are necessarily imperfect and subjective. A perfect expert would already know the right answer for sure – and the right answer is almost certainly that the AdS/CFT is right. ;-)

Best wishes

Lubos