At some point I promised that I was going to write about my most recent paper, so here is the promised post. In a sense, that paper is an exercise in understanding what it means to have quantum gravity in a setup of emergent geometry: a situation where geometry is not there a priori, but is extracted from some collective behavior of a system. I don’t want to go into the semantics of what emergence means. For our purposes it is something extracted by a non-trivial procedure from a system with a lot of degrees of freedom, where the stuff we extract involves all the degrees of freedom simultaneously in a non-trivial way. The system is quantum mechanical, so there are quantum fluctuations and whatnot, and the whole purpose of our study is to measure some property that can be associated with a distance, taking into account that the measurement will give you some type of probability distribution on a variable that is supposed to be geometric. Instead of doing this all analytically, we did it by computer simulations and ran it like an experiment. To top it off, the research was done with an undergraduate student, who ran the simulations and did some of the basic data analysis of the numbers we got.

Here is the longer version.

Preliminary remarks:

I’m using the word quantum in the sense of quantum mechanics. This means that we need a Hilbert space of states, and particular states in that Hilbert space. In the case of quantum gravity, for each quantum state you get one universe, so to speak. The states can be superposed, you can have dead/alive cats, there can be interference, and in the end you compute probabilities, or probability distributions.

Most interesting Hilbert spaces are infinite dimensional, and they all look the same to the untrained eye. Heck, even to the trained eye they look the same. Thus, one cannot just do random quantum mechanics with a random Hilbert space and see what happens. You need extra structure that lets you make meaningful statements about these states. This usually shows up as having a preferred set of variables (let us call them x for lack of a better name), and then one can write a wave function of these variables, usually denoted by $\psi(x)$.

A wave function defines a state so long as it satisfies some basic properties, and given a state, one can measure various things that depend on x (remember that x is a set of variables).
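As a concrete (and entirely standard) illustration of "measuring things that depend on x", here is a minimal numerical sketch: put a one-dimensional wave function on a grid, square it to get the Born-rule probability density, and compute an expectation value. The Gaussian and its parameters are placeholders, not anything from the paper.

```python
import numpy as np

# A toy wave function psi(x) on a grid; the Gaussian centered at x = 2
# is an arbitrary choice, purely for illustration.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-(x - 2.0)**2 / 2.0)

# Born rule: the probability density is |psi|^2, normalized to 1.
prob = np.abs(psi)**2
prob /= prob.sum() * dx

# "Measure something that depends on x": here, the mean position.
mean_x = (x * prob).sum() * dx
print(mean_x)  # -> 2.0, the center of the Gaussian
```

Replace the Gaussian with any other normalizable wave function and you get a different probability distribution, and hence different statistics for whatever you measure.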

The second point to make is what gravity means above. Well, the modern understanding of gravity is based on the geometry of spacetime being curved. So if you have quantum gravity, you are talking about a system where you have some geometric information, and it is governed by quantum mechanics. This means that the geometry can and should fluctuate.

Once you are here, if you have some quantum system, you might be able to talk about geometry. If you have a single variable x, it could be the volume of the universe. So if your wave function is peaked at large x, you might say that you predict a large universe.

However, in theories of emergent phenomena, in particular, emergent geometry, the geometry is not there as the natural description of your system. The geometry appears as some organizing principle that correlates your various variables in an interesting way and only in some situations. Geometry is not automatic given a wavefunction. So you need models where this can be done in a reasonable way. You need a lot of variables (geometry can have a lot of positions), and the variables have to more or less look the same (they are not completely random). Afterward, you can ask where in this mess of many variables is the geometry hiding. My favorite model for this is the AdS/CFT correspondence, but this is still too hard to be able to give a full analysis of where the geometry comes from.

Main point

Understanding emergent phenomena is hard. You need toy models. Especially so if the stuff that is emerging is the geometry of spacetime. In such situations, the spacetime is not there at the beginning. It has to be extracted from some other data. Moreover, if you change your wave function, you change your geometry.

What this means is that the lengths of features and such change depending on your wave function, so you not only need to be able to say that you have geometry, you need to be able to measure distances on it. And to top it off, your geometry is fluctuating, meaning that when you measure you might get more than one answer, with some probability distribution.

Ok, preamble is done. What did we do with my student?

We took a model of emergent geometry that I developed way back when (’05) that goes some way towards understanding the geometry of the AdS/CFT correspondence. The model has some nice wave functions that can be argued to have geometric features, and changing your wave function lets you change the geometry and topology of these features. This is still too general, so we picked very simple wave functions in the model, and we put them on a computer, so that we could generate probability distributions and see how the geometry depended on the parameters of the model.

You get pictures like the one shown below:

The wave functions have many variables (6N to be precise), where N varies. All of these variables are pretty similar to each other, and get grouped into N collections of 6 variables. Each member of this collection of N is like any other, so it makes sense to compare them all by plotting the individual data in 6 dimensions. Excuse me? 6 dimensions? I don’t know how to see that. So for the picture above I projected them all into 2 out of those six dimensions.
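The projection step itself is easy to mimic. The sketch below is not the model from the paper: it just scatters N points uniformly on a 5-sphere, as a stand-in for a configuration of N similar 6-component variables, and projects onto two of the six coordinates, which is the same trick used to make the picture.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000

# Toy "configuration": N points uniformly on the unit 5-sphere in R^6.
# (NOT the wave function from the paper; just a stand-in that treats
# all six components democratically.)
pts = rng.normal(size=(N, 6))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Project every point onto coordinates 1 and 2 (the "12 plane").
proj = pts[:, :2]
r = np.linalg.norm(proj, axis=1)

# The projection fills a disk of radius 1; for a uniform 5-sphere the
# mean squared projected radius is exactly 2/6 = 1/3.
print(r.max(), np.mean(r**2))
```

Scatter-plotting `proj` gives a 2D picture of a 6D configuration, which is all the projection in the post is doing; the shape you see (disk, donut, and so on) is then a property of the distribution you sampled from.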

Well, you can clearly tell that you get a disk-like geometry and that it has a hole: it looks more or less like a donut. Now remember that this is only one picture of a typical configuration as described by the wave function, so you can get a lot of pictures like the one above. And then you have to decide what the radius of the hole is.

Well, our procedure was designed before taking the data. We wanted to average over configurations some set of functions that tell how far the particles are from the origin. We wanted these functions to be dominated by the particles that are close to the hole of the donut, rather than the ones that are far. We also wanted these functions to be easily computable and democratic between the degrees of freedom. We decided on

So given a configuration, you weigh the projected position on the 12 plane by how close the different variables are to the origin. The Pi in the equation above is a projection. You want to average these over the particles, so you need to divide by N, and then you have to choose what to do with all the values that you record between configurations. Finally, the idea was to take k to infinity (meaning large) to define the radius.

The ideal definition is that you take it as above, average, do some simple arithmetic, and voila: you get some typical radius with some statistical distribution attached to it. However, we found that if k was large enough, the average of the above expression was infinite, and that is bad, because then it would tell you that the radius of the hole is zero. What we found was a probability distribution with some moments not defined.
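Schematically (this is my guess at the form of the weight function; the precise expression is in the paper), the mechanism fits in one line. Suppose the radius is defined through inverse moments of the projected radial positions,

$$r_{\rm hole} \;\sim\; \lim_{k\to\infty} \left\langle \frac{1}{N}\sum_{i=1}^{N} \frac{1}{|\Pi x_i|^{k}} \right\rangle^{-1/k}.$$

If the probability density of the projected radius behaves like $P(r)\sim c\,r^{\alpha}$ as $r\to 0$, then

$$\left\langle r^{-k} \right\rangle \;=\; \int_{0} P(r)\, r^{-k}\, dr \;=\; \infty \qquad \text{for } k \ge \alpha + 1,$$

so the random variable $r^{-k}$ is perfectly well defined, but its mean (and all higher moments) cease to exist: a probability distribution with some moments undefined, exactly as described.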

So we had to work out different averaging procedures that take this feature of the probability distribution into account, and each of these different ways of doing things gives different answers. Moreover, at large N the result is supposed to converge for all the different measurements to the same value, no matter what. But how fast it converges depends on the procedure for getting there and on what precisely one is averaging.
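A cheap numerical caricature of why the averaging procedure matters, using an assumed toy distribution rather than anything from the paper: take a radius r uniform on (0, 1), so that the weight 1/r is heavy-tailed with an infinite mean. The sample mean then never settles down, while a quantile-based estimate (the median) converges nicely, and different finite-N procedures genuinely disagree.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the measured quantity: for r uniform on (0,1),
# w = 1/r is a Pareto-like variable with an infinite mean (heavy tail).
r = rng.uniform(0.0, 1.0, size=100_000)
w = 1.0 / r

# Two averaging procedures applied to the same data:
sample_mean = w.mean()        # dominated by the few smallest r; unstable
sample_median = np.median(w)  # robust; the exact median of 1/r is 2

print(sample_mean, sample_median)
```

The median lands very close to 2, while the mean is set almost entirely by the smallest r drawn and keeps growing with sample size: the numerical face of "some moments are not defined."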

Our main conclusion was that how you average matters, and that our values of N were not really large enough to be conclusive on some of the things we wanted to determine. Therefore, we need a bigger simulation. In a certain sense, this helps to understand the difficulties in various issues related to emergent wobbly geometries.

Finally, I have the Homer Simpson solution to all of this: eat the donut! Duh.

Luboš Motl (February 5, 2010 at 3:52 pm): It’s a great project for an undergrad kid (and congratulations on his or her skills)! There should be many more computer-based calculations of a similar kind, so that’s all great.

Although it would be even better to find one that can actually be done completely enough so that it works – a nontrivial agreement is obtained at the end. 😉 I suspect that the massless/massive BFSS/BMN model could actually give you more room for such experiments.

Concerning the emergent geometry, do you understand a simple point why physics on a background of a coherent gravitational wave in the bulk satisfies the equivalence principle? Why all the bulk objects (e.g. small Schwarzschild black holes) behave as normally, except with a different background geometry?

What does the coherent state correspond to in terms of the boundary operator (stress-energy tensor) that is dual to the graviton? I suspect it’s some non-local integral transformation on the boundary, or is it local?

dberenstein (February 5, 2010 at 6:02 pm): Hi Lubos:

Great questions. I wish I had answers.

However, since I’ve thought long about these same questions, these are some points that must be considered to argue that one has a solution:

There must be some quasi universal low energy dynamics in backgrounds that are just gravity. This means that the corresponding dual field theory configurations are special somehow. Not any random state will work. This universal dynamics has to be common to many theories.

If such a thing is there, it is plausible that small Schwarzschild black holes are just a manifestation of this universal dynamics and then their properties will be the same in a lot of different setups.

Finally, there is the issue of the ‘radial direction’. No one has proved locality in the radial direction. I don’t think anyone understands it in the field theory. Arguing that this direction is like the other ones on the boundary is a deep mystery. The typical solution for the case of conformal theories is that AdS is the only space with the right symmetries, so if it is going to work, it has to be AdS. But this is a kinematical solution to the problem, not a dynamical one. There has been no dynamical solution to this problem.

Regarding coherent states, they usually are exponentials of a basic field operator (like vertex operators in field theory).

I think it should work the same in this case.

Luboš Motl (February 6, 2010 at 9:15 am): Dear David, we’ve apparently visited similar corners of the realm of ideas.

Concerning the universal behavior, yes, sure. There is a universal one. More precisely, there are many of them (a landscape of them) and they share the metric tensor and its behavior.

If you’re more specific, there’s a universal low-energy limit in type IIB SUGRA in d=10, the universal local ten-dimensional gravitational physics. So if you say that locally, physics has the N=2 d=10 super Poincaré symmetry, you must be guaranteed that it is the type IIB string theory, and the local value of the dilaton and RR axion are the only additional labels that tell you what can non-equivalently happen with physics.

To make this statement provable in the N=4 SYM boundary theory, one must formulate a problem in its variables. Create a “background” (e.g. a coherent gravitational wave) as a microstate in the boundary CFT. If it is “locally in the bulk” N=2 d=10 super Poincaré invariant, it will be (locally) equivalent to type IIB string theory – i.e. the same long-distance limit – at some values of the dilaton and axion.

Can’t one actually prove such a thing? You may notice that I need to define “locality in the AdS bulk” in the CFT variables to make sense out of the previous paragraph. But this definition of locality doesn’t have to be an extremely accurate locality.

A related problem that is even tougher is to constrain the allowed form of N=2 super Poincare generators that are acceptable as a solution. After all, all infinite-dimensional Hilbert spaces are isomorphic to each other, so on each of them, you may define N=2 super Poincare generators, closing to the right algebra. They will just “completely violate” the underlying dynamical structures if you start with a completely wrong representation of the Hilbert space, e.g. the Hydrogen atom.

What does it mean to completely violate the “underlying dynamical structures”? I think that it means to be “heavily nonlocal” in some sense. So a refined statement is that if you find a microstate in the CFT, plus some N=2 d=10 generators that act on “local degrees of freedom” around some point in the bulk space (a thing to be defined) and that respect the rules of locality in some technical sense yet to be determined (they’re “not qualitatively far/different from the normal boundary Hamiltonians”), then the spectrum of all these N=2 generators – e.g. the spectrum of allowed squared masses – will coincide with the right universal type IIB spectrum for a value of dilaton/axion. (That, of course, includes your “universal” small Schwarzschild black holes, and all other things. I doubt that local patches have sufficiently accurate physics so that you can read the spectrum of arbitrarily high, exponentially dense black hole microstates at high masses, but you should surely be able to read things with some accuracy.)

Such proofs could be analogous to proofs of universal infrared limits of field theories etc. – but in a somewhat grander context (or in a gravitational translation of the QFT counterparts). At any rate, it’s unclear at this point.

I feel that such things could be doable and provable – no one has just spent enough time with these proofs of the “universality of stringy gravity”.

Also, I completely agree that the AdS-symmetry proof of the existence of the background geometry is a kinematical one, and there should be a more general dynamical (and more local) one. This is a difference of proofs that I have noticed in more trivial contexts. For example, in the light cone gauge, you can prove D=26 or D=10 by the closure of the Poincare algebra. J^{i-} must commute with J^{j-} and the second-order quantum terms must cancel (they morally know about the conformal anomaly). You know what I’m talking about, right? So this proof is analogous to the symmetry-proof of the appearance of the AdS space.

But of course, there also exist more “universal” and less “background symmetry dependent” proofs of D=26 and D=10 in string theory – e.g. in covariant string theory, using the conformal anomaly – and we want the corresponding finding in the AdS/CFT case, too. Some proof that can reconstruct the AdS x S bulk space “locally”, right? I guess we have defined the same research projects but so far failed to get the right answers. Instead of all people saying the same good questions, there should be some people who don’t say questions but instead offer the right answers. 😉

And: Do you know what are the exponentials of the actual right combinations of the boundary stress-energy tensor, those that should generate the bulk coherent gravitational waves? What does such a wave really look like in the boundary CFT variables? May I talk about boundary operators that are dual to the whole coherent wave in the bulk? This should be a straightforward (and perhaps well understood) problem, right? Do they generate just translations – like the Taylor-series “translation” does – or some nonlocal transformations? What’s special from the boundary viewpoint about these nonlocal transformations?

Best wishes

Lubos

Luboš Motl (February 6, 2010 at 12:46 pm): OK, I think I found the answers to all the simple questions.

To add the coherent state (bulk gravitational wave) simply means to deform the boundary action, by multiples of the stress-energy tensor, much like in the world sheet CFT (nonlinear sigma model). The deformation only depends on 4 spacetime coordinates, not 5 in AdS5, but it’s OK because

1) the dependence on the radial coordinate is calculated by the RG flow – only on-shell waves are allowed in the bulk

2) the five-sphere harmonics of the graviton may also be added, with the BPS-protected “symmetric traceless tensor” BMN-like operators (without waves).

Now, it is pretty obvious that at any location along the boundary and any scale (= the radial coordinate), it’s still the same “N=4 gauge theory action”, just in different coordinates. One needs to calculate some RG flows, in order to find out the shape of the gravity waves far away from the boundary, but it can be done.

Next, the question is locality in the radial direction. It’s equivalent to locality in the RG scale dimension. Roughly speaking, it’s decoupling of different scales. The nontrivial fun is that for large N (colors), this decoupling becomes arbitrarily accurate.

Normally, we say that physics at 1 GeV is decoupled from 1 TeV – QCD from electroweak, and so on. But if N (colors) is large, even 1.1 GeV is decoupled from 1 GeV. It’s because all the spectra become very dense. Note that the scaling of the distances by “e” corresponds to a proper distance shift in the radial dimension by Radius (AdS) or so. So if it is much longer than the 10D Planck scale, there must be a lot of stuff happening per one e-folding of the scale.
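The e-folding statement follows directly from the Poincaré-patch metric, with the usual identification of the radial coordinate with the inverse RG scale:

$$ds^2 \;=\; \frac{R_{\rm AdS}^2}{z^2}\left(dz^2 + \eta_{\mu\nu}\,dx^\mu dx^\nu\right), \qquad z \sim \frac{1}{\mu}.$$

Rescaling all distances by a factor of e sends $z \to e\,z$, and the proper radial distance covered is

$$\int_{z}^{e z} \frac{R_{\rm AdS}}{z'}\,dz' \;=\; R_{\rm AdS}\,\ln e \;=\; R_{\rm AdS},$$

i.e. one AdS radius per e-folding of scale, so a radius much longer than the 10D Planck length indeed means a lot of states per e-folding.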

Now, the dense spectrum is not a sufficient condition for the locality in the radial dimension. But I think it is possible to show that it kind of automatically emerges in any theory. Just consider a theory with many states in a small interval of dimensions, and study what the Hamiltonian does with them. I feel that it must be possible to define an “effective Hamiltonian” at the scale that will only be affected by the nearby states, which means – for large N – only operators with almost identical dimensions.

So I believe that no special properties of the conformal theory etc. are needed to derive the locality in the radial dimension, except for the high density of states. This should be provable – that locality in log(mu) is automatic if the density of the spectrum is high.

Cheers

Lubos

Giotis (February 5, 2010 at 6:05 pm): David, I have a kind of naive question. Physically, why do I need SUSY Yang-Mills to tell me how geometry emerges? We already know from string theory how geometry (i.e. gravitons) emerges: it emerges from the oscillations of a superstring.

dberenstein (February 5, 2010 at 6:59 pm): Hi Giotis:

It’s not the same. In ordinary string theory, there is a background geometry, and gravitons arise from quantization of the string: the geometry is already there and can be deformed.

In a theory of emergent geometry, geometry is not there. It gets built up as an effective description of some type of collective phenomenon.

You don’t need SUSY YM for that. What happens is that in the cases where we think we understand how it happens, it turns out that SUSY YM played a very important role in getting there, and we have not been able to see this stuff happen very well without SUSY. It is believed that some of the SUSY cancellations are important for this emergent geometry, but there might be other ways of getting the same cancellations to happen.

Giotis (February 5, 2010 at 8:06 pm): Ok, but the prerequisite of background geometry is kind of a technical issue, isn’t it? What I mean is that the *whole* background geometry (the metric field) is built by the gravitons of the oscillating string. At the end of the day we need gravitons for geometry and we know how to get them.

The fact that you must have a background to formulate your theory is an issue that will be solved when you have a deeper understanding of the theory.

dberenstein (February 5, 2010 at 11:01 pm): Hi Giotis:

It is more than a technical issue. The reason is that we do not know a complete non-perturbative formulation of string theory (as a whole) yet. Our knowledge of string theory kind of stops at the Planck scale: we do not have a calculation that tells us exactly how black holes emit their information out, whether geometry is the right set of variables to describe the ‘interior of a black hole’ or not, and a lot of other issues that really require us to think about the Planck scale. String theory is a great description of physics at the string scale, but that is very different from the scale where black holes start forming. If we cannot address effects beyond the Planck scale, we cannot solve the problems of quantum gravity.

Finding gravitons is definitely a step in the right direction and if we have them we can deform backgrounds by making coherent states of these gravitons so that we know we have a theory of gravity. We can flow between well behaved backgrounds to show that the theory is somewhat background independent in the end. But we still want to show that the theory has predictions for ultra-planckian scattering, for the big bang and for the solution of the information problem (a unitarization of the theory).

The best description of string theory as a complete theory we have is exactly in terms of the AdS/CFT correspondence, where the CFT provides the unitarization of the string theory. From the CFT point of view, AdS is a ‘derived concept’, but then you have to ask where is it hiding in the field theory (as well as the strings), and we can ask what happens when geometry breaks down. You also need to explain why the theory we get in AdS looks like Einstein gravity, with approximate locality, etc etc, and not something completely different.

Luboš Motl (February 6, 2010 at 8:52 am): Dear David,

your answer to Giotis is surely the canonical reflexive reaction but we might be surprised.

Indeed, the background can be deformed by the closed strings, but only by “small” amounts – of order “g_{closed}” or something like that. And it seems impossible to get the “whole” metric tensor of the background out of nothing because it’s a “nonperturbatively huge” amount of the graviton condensate. Moreover, there are probably many “qualitatively different ways” for the geometry to emerge, and we want to understand all of them.

And in the AdS/CFT case, we say that we don’t have any bulk geometry to start with. Is that really correct? Don’t we just mean that we don’t see an obvious way to get it?

Perturbative string theory resembles local field theory – we know the positions of the objects, e.g. center-of-mass coordinates of closed string excitations. In the bulk of AdS/CFT, we don’t. But maybe we’re just stupid. We know the bulk AdS geometry to the extent that we know its isometry. We don’t see the locality, at least not in the radial direction, but we know how to move objects around the AdS space. Conformal transformations etc.

The boundary CFT formulation has no bulk diffeomorphism gauge symmetry built into it. But there may exist a simple way how to formulate the CFT in a new redundant way so that the bulk diffeomorphisms are there, and the bulk locality may become manifest with it, too.

I think physicists/we should more carefully distinguish “ways of emergence” that can’t really happen in a given formalism, and those that we just “can’t see to happen” at the present time.

Best wishes

LM

P (February 6, 2010 at 12:32 am): You should mention your undergrad by name in the text!

dberenstein (February 6, 2010 at 12:48 am): Dear P:

You can look at the paper: there is a link in the post. Point is, I didn’t get the permission of the undergraduate to use their name for this post.

I can imagine all kinds of scenarios where putting that information directly in a blog post might cause unintended damage (a lot of unwanted e-mail for example). Hence, that was my reasoning. I can also imagine a lot of unintended good from using such a procedure, but I err on the side of caution.

Luboš Motl (February 6, 2010 at 8:41 am): It’s absolutely wise of the undergrad student to be hiding in this way. 😉

Giotis (February 6, 2010 at 11:51 am): David, I understand what you are saying. It’s just that there is a deeper interplay between the string world-sheet and the target-space geometry. Even in this regime, the requirement of conformal invariance at the quantum level in the way the string couples to the background geometry gives the vacuum field equations for the target space. So my understanding is that the world-sheet point of view is more fundamental than the notion of geometry, which should be a derived concept. When I said that it is kind of a technical problem I meant exactly that, i.e. that with the current formulation the geometry is a prerequisite of the world-sheet point of view. In that respect it seems to me that the right degrees of freedom have not been identified.

But as I said, I understand your position that all these (strings, geometry etc.) are derived concepts of the CFT.

Luboš Motl (February 6, 2010 at 1:28 pm): Look where the locality comes from.

Locality really comes from the analyticity in the momentum. If you imagine a wave function of anything as a function of the momentum – let’s take discrete momentum now – the position is given by the change of the phase as the function of the momentum.

The time translations generated by the Hamiltonian are changing the phase differently for different momenta, too. But the dependence of the energy on the momenta is always bounded, so you get a speed limit: the speed – change of “x” with time (recall that “x” is the change of phase per momentum) – is always smaller than a certain bound, and it gets identified with the speed of light.

Also, the operators or other objects associated with different positions in “x” don’t mix with each other. They have an (approximately) vanishing supercommutator, and their commutators with the Hamiltonian keep them in the same sector (same location in “x”). All these things can be seen by a simple summation of phases – that cancels unless the phase is zero, as long as all the other factors depend continuously on the momenta.
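The "position = change of phase per momentum, speed = dE/dp" argument can be checked in a few lines. The sketch below is a toy of my own, not anything specific to the CFT discussion: a wave packet is stored directly in a discrete momentum basis, its position is read off as the expectation of i d/dp, and free evolution with the relativistic E(p) = sqrt(p² + m²) moves it at a speed strictly below 1, because |dE/dp| < 1 everywhere.

```python
import numpy as np

# Momentum-space toy model: position is the expectation of i d/dp,
# i.e. "the change of the phase as a function of the momentum".
p = np.linspace(-10.0, 10.0, 4096)
m = 1.0
E = np.sqrt(p**2 + m**2)            # bounded slope: |dE/dp| < 1 everywhere

# Gaussian packet around p0 = 3; with no extra phase it sits at x = 0.
p0 = 3.0
psi0 = np.exp(-(p - p0)**2 / 2.0).astype(complex)

def position(psi):
    """<x> = <i d/dp>, evaluated with a finite-difference gradient."""
    dpsi = np.gradient(psi, p)
    return (np.conj(psi) * 1j * dpsi).sum().real / (np.abs(psi)**2).sum()

t = 2.0
psi_t = np.exp(-1j * E * t) * psi0  # free evolution in the momentum basis

v = (position(psi_t) - position(psi0)) / t
print(v)  # roughly dE/dp at p0, i.e. about 0.94, and safely below 1
```

Any bounded dispersion relation E(p) gives the same qualitative result: the phase accumulated per unit momentum can only grow at the rate dE/dp, which is the speed limit in question.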

I have always been confused by this point but it makes more sense now. Consider e.g. winding numbers of a string. T-duality converts windings to momenta, which are dual to a new toroidal dimension. Why is there a locality in the new toroidal “x” complementary to the winding number? Of course, T-duality tells you why – you convert it to the case of the ordinary momentum.

But could you do it without an exact T-duality (analogous to the “dynamical proofs” instead of the special “kinematical proofs”)? I think that the answer is Yes.

So I believe that whenever you generate many states/operators that are locally labeled by some integer labels with many possible values (high density of states in this variable), the integer labels (within a small patch of these states, which still contains many of them) may be interpreted as momenta dual to some emergent coordinates, and the locality in these new emergent coordinates automatically follows.

Of course, one needs to study more accurately what kind of dependence of the Hamiltonian on these charges/momenta are allowed, and why the real systems satisfy these conditions, but I think that the basic framework of the proof may be robust.

Cheers, LM

Luboš Motl (February 6, 2010 at 2:02 pm): I see: the subluminal condition is probably actually the weak gravity conjecture!

http://arxiv.org/abs/hep-th/0601001

I was always bothered by our need to use the lightest state. Who cares about some f* lightest state. It’s an accident how the lightest state looks like, right?

So the key statement in an updated weak gravity conjecture should not talk about the mass of the lightest state but about the mass differences among states with high values of the charges/labels. This mass difference shouldn’t exceed “g” times “m Planck”, with some correct numerical coefficients.

This can also be generalized to a higher number of charge species, by defining the metric tensor on the space of charges, which is inverse to the metric on the emergent space. So the dimension emerging from any kind of U(1)-like charge – or any other labels that locally look like U(1) charges – always automatically satisfies some speed limitation by the speed of light.

Of course, the original weak gravity conjecture is for a 4D gravitational theory, and the formula uses “m Planck”, so it has to be redone a little bit. But it seems reasonable that the focus on the lightest state was a mistake, and one should have looked at the asymptotic mass spacings between highly, generically charged states, and the “weak gravity” inequality actually implies that the Hamiltonian doesn’t like to change much as a function of these charges. This is true for normal theories where the maximum charge/mass ratio is obtained for BPS saturated objects. Other types of charges, including non-BPS contributions, have a lower charge/mass ratio – but higher charge-increase/mass-increase ratio (i.e. lower mass-increase/charge-increase ratio).

dberenstein (February 6, 2010 at 8:56 pm): You have given me a lot to think about.

Locality is a tricky one: for example, if one does Non Commutative gauge theory, the dispersion relations are the same as in a Lorentz Invariant theory. However, the higher correlations are not.

In these theories, locality breaks down at the Non Commutativity scale.

I’m not sure if this is an artifact, or whether one needs to think about this more carefully.

Luboš Motl (February 6, 2010 at 9:09 pm): Good point, agreed, of course.

When looking at correlators similar to the noncommutative field theories, it’s probably healthy to use the momentum (not coordinate) modes/representation, at least in the directions that arise “discretely” from the CFT, such as the S^5 coordinates (and maybe the radial one in AdS).

I was only trying to prove some approximate locality in the large N limit.

The nonlocality “scale” in noncommutative field theories (derived from strings in B-fields) is always bounded by the (closed) string area from above, isn’t it? The “unit of area” in the noncommutative x1-x2 phase space is at most alpha’, up to numerical constants, right? I was assuming that there may be nonlocality over a string scale, anyway – which is N_{colors}^{-1/4} times the AdS5 radius, is that correct? So this nonlocality relatively goes to zero in the large N limit, and the noncommutative nonlocality is just a contribution of the same order which I included under the same umbrella.

I know that without a B-field, there is in some sense a bigger locality, even at sub-stringy scales – for the open string massless modes – but it’s kind of an artifact of omitting the massive tower, anyway. And in some broader sense, there is a “full” locality as long as one appreciates the extended character of strings (and/or other basic objects): strings interactions are local on the worldsheet which, by continuity, means local in spacetime. But they’re not local relatively to the center-of-mass coordinates of the string, and the “local on the worldsheet argument” goes beyond a “field theory description” in spacetime and depends on the basic objects you are ready to take, so it wasn’t meant to be so.

So the aim is not to prove an exact locality, just an approximate one, and in fact, I think the exact locality is not really true for any field-theory-like bulk description – or at least not easily provable by simple methods. The string length in the bulk should be chosen as the first scale where nonlocalities appear – it is already nontrivial to see why the locality is OK at distances much longer than l_{string} but much shorter than R_{AdS}.

dberenstein (February 8, 2010 at 8:13 pm): I’m confused.

Usually, the noncommutative field theory limit is the limit of very large B field. So that the `open strings’ are made large in string units, even for momenta that are smaller than the string scale (a decoupling limit).

Regarding the other comments, I fully agree with them.

Uncle Al (February 8, 2010 at 4:59 pm): Luboš, “physics on a background of a coherent gravitational wave in the bulk satisfies the equivalence principle”. 45 degrees latitude, FT microwave spectrometer, vacuum-phase molecular propeller dimers: right-right, right-left, and left-left with a rigid connection.

The four bridgeheads are homochiral centers related by three orthogonal C_2 rotation axes. They are degenerate, point group D_2. Twistane is all identical twist-boat cyclohexane rings that are homochiral atomic mass distributions themselves.

The Earth rotates about its axis as it orbits the sun. If the microwave rotation spectrum shows divergent spin populations vs. chirality with a 24-hr cycle, the Equivalence Principle is falsified for opposite parity mass distributions (re: pdf pp. 25-27, calculation of the handed case).

I do not doubt your derivations and their consequences, Luboš. I challenge your founding postulate with a trivial experiment. If observation falsifies your theory, said theory is non-physical. Somebody should look.

on February 8, 2010 at 8:50 pm Luboš Motl: Dear Uncle Al, I think you’re right that this is one way to check violations of the equivalence principle. And let me bet 99-to-1 that your experiment won’t see any violations. Good luck with the funding, LM

on February 8, 2010 at 9:14 pm Luboš Motl: David:

“I’m confused.

Usually, the noncommutative field theory limit is the limit of very large B field. So that the `open strings’ are made large in string units, even for momenta that are smaller than the string scale (a decoupling limit).

Regarding the other comments, I fully agree with them.”

It depends on what units you choose. In the normal closed string alpha’ units, the distances “sqrt(theta)” where you see the noncommutativity are stringy. They’re large in the open string units, but for a large B, the open string sqrt(alpha’) becomes much shorter than the closed string sqrt(alpha’). Unless I am wrong, of course.

To recall some of the formulae, see e.g. page 8 (9 of 100) of Seiberg-Witten 1999.
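Since the referenced equations don’t reproduce here, a reconstruction of the relevant Seiberg-Witten relations as I recall them (conventions, signs, and factors of 2 pi may differ from the paper):

```latex
% Open string metric G and noncommutativity parameter theta
% in terms of the closed string metric g and the B-field:
G_{ij} = g_{ij} - (2\pi\alpha')^2 \,\bigl(B\, g^{-1} B\bigr)_{ij},
\qquad
\left(\frac{1}{g + 2\pi\alpha' B}\right)^{ij}
  = G^{ij} + \frac{\theta^{ij}}{2\pi\alpha'} ,
```

where G^{ij} is the symmetric part and theta^{ij}/(2 pi alpha’) the antisymmetric part of the inverse of g + 2 pi alpha’ B.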

Note that the bulk of the worldsheet is governed by correlators of little “g” that encodes the distances in the normal closed string units (“g” is the “closed string metric”). However, the effective metric useful for the open strings is given by “G” (the “open string metric”), and G_{ij} has much bigger components than g_{ij} in the large B limit (by a factor of B^2, because the second B.B term dominates). So the normal distances comparable to sqrt(alpha’) as measured by the closed string metric – the one from closed strings that do not care about the B-field at all – look like infinitely long (B times longer) distances when measured with the metric tensor G_{ij} relevant for the open strings.

The scaling limit is large B, but you also focus on distances that are shorter than sqrt(alpha’).

As you can see on page 12 (13 of 100) of the paper, “theta” is actually “1/B”, and it enters the boundary correlators of “x”. So large “B” actually means small “theta” in this limit, and the noncommutative reach is small.

Or look at equation 2.6 on page 8 (9 of 100), normalized in the original closed string metric. The first term, G^{ij}, goes like 1/B^2 for large B, while the second (theta) term goes like 1/B for large B. It means that you may define new coordinates “B times x” (finite if you focus on short distances, of order the closed L_string over B). But then the second term, the commutator of these “B times x”, gets a factor of B in the numerator. But that’s still substringy (or stringy?) in the closed string metric.
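The 1/B^2 and 1/B scalings can be checked numerically by inverting g + 2 pi alpha’ B for a flat two-dimensional closed string metric and splitting the result into its symmetric part (G^{ij}) and antisymmetric part (theta^{ij}/2 pi alpha’), following the Seiberg-Witten definitions. A small sketch, not from the original thread (the function name and the choice alpha’ = 1 are my own):

```python
import numpy as np

def open_string_data(B, alpha=1.0):
    """Split the inverse of (g + 2*pi*alpha'*B) into symmetric and
    antisymmetric parts, which per Seiberg-Witten give the inverse
    open string metric G^{ij} and theta^{ij}/(2*pi*alpha').
    Flat 2d closed string metric g = delta_ij; alpha' = 1 units."""
    g = np.eye(2)
    Bmat = 2 * np.pi * alpha * np.array([[0.0, B], [-B, 0.0]])
    Minv = np.linalg.inv(g + Bmat)
    G_inv = 0.5 * (Minv + Minv.T)                      # symmetric part: G^{ij}
    theta = 2 * np.pi * alpha * 0.5 * (Minv - Minv.T)  # antisymmetric: theta^{ij}
    return G_inv[0, 0], abs(theta[0, 1])

# Doubling B at large B: G^{ij} drops by ~4 (so it scales like 1/B^2),
# while theta drops by ~2 (so it scales like 1/B).
G1, t1 = open_string_data(1e3)
G2, t2 = open_string_data(2e3)
print(G1 / G2)  # ~4
print(t1 / t2)  # ~2
```

The printout confirms the scalings quoted in the comment, and one can also check that theta itself approaches 1/B at large B.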

There might be subtleties but I think that the noncommutative phase cell never becomes (much) bigger than the closed string’s L_string. I am just not sure whether it is exactly L_string (closed), or a distance scale sqrt(B) times shorter.

Cheers

LM

on February 8, 2010 at 9:21 pm Luboš Motl: I got kind of sure that the phase cell “radius” sqrt(theta) has a physical length that is sqrt(B) times shorter than the closed string L_string, but sqrt(B) times longer than the open string L_string. It is the geometric mean.
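The geometric-mean statement is consistent with the large-B scalings quoted earlier in the thread. A quick sketch (my own check, with g_{ij} = delta_{ij}, alpha’ = 1, and factors of 2 pi dropped):

```latex
% G_{ij} ~ B^2 \delta_{ij} at large B, so an object of string-scale
% proper size in the open string metric G has closed-metric size ~ 1/B.
L_\text{closed} \sim 1, \qquad
L_\text{open} \sim \frac{1}{B}, \qquad
\theta \sim \frac{1}{B}
\;\;\Longrightarrow\;\;
\sqrt{\theta} \sim \frac{1}{\sqrt{B}} = \sqrt{L_\text{closed}\, L_\text{open}} .
```

So sqrt(theta) is indeed sqrt(B) times shorter than the closed string scale and sqrt(B) times longer than the open string scale, up to numerical constants.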

So it looks infinitely long in the open string frame, appropriate for the nice formulation of the noncommutative theory, but this nonlocality distance scale looks infinitely short in the closed string units – those of the closed string spacetime where you embedded the brane.

Of course, I was always working with the latter because locality and causality are always related to the closed strings – which determine the metric tensor. The speed of the actual (open string) light is sqrt(B) times slower than the speed of gravity – the latter remains the maximum speed. I think this comes from the rescaling of the noncommuting coordinates, too.

There could be new issues with space-time noncommutativity, a partially timelike arrangement for B_{i0}. This was always tough. It corresponds to an electric field on the brane. When the field is too high, you start to Schwinger pair-produce the open strings because the tension is beaten by the electrostatic energy of the created dipoles – the brane becomes unstable.

Near this critical field, it might be that the open strings are actually getting longer – but this is not the space-space noncommutativity both of us were talking about, and this space-time noncommutativity is much more problematic because it introduces nonlocalities in time which destroy the Hamiltonian way of defining dynamics (which requires only first time derivatives and locality in time).

on February 12, 2010 at 8:08 pm Plato: Interesting.

From Witten’s thoughts below….

So one sees where the link has been made between these two branches in terms of emergence?

It is of interest to me how your work is held in the context of the work of Mandelstam and the genus figures. A simple concept, in general, developed in the valleys?

Best,

on February 12, 2010 at 8:53 pm dberenstein: Plato:

Emergence is a technical concept. Something is emergent if the behavior of a collective system is different from the behavior of its parts.

Basically, things like hydrodynamics are emergent phenomena: you cannot describe hydrodynamics for a single particle.

These are also called collective phenomena sometimes.

Spacetime being emergent means that if sufficiently many parts of a dynamical system are arranged in a particular way due to some internal dynamics, you get behavior that is sufficiently similar to what we call spacetime with stuff on it.

We use the same words as referring to the same technical concept. That does not mean that we are solving the problems that condensed matter theorists want solved.

Now about the Mandelstam comment, I don’t know what you are referring to.

on February 13, 2010 at 7:21 pm Plato: Hi David,

No, I think I understand this. It was more the idea that one could have considered strings themselves as “constituent figures of the system” and, from that, “an emergence.”

Understanding, of course, that we place these concepts in expressions of the universe in terms of microseconds and not minutes. This helps me to recognize an earlier time frame? Is this a wrong perspective to hold?

(bold added for emphasis)

In regards to Mandelstam and genus figures:

Just trying to understand how your work is related to genus development, if at all.

Thanks,