At some point I promised that I was going to write about my most recent paper. So here is my promotion. In a sense, that paper is an exercise in understanding what it means to have quantum gravity in a setup of emergent geometry: a situation where geometry is not there a priori, but is instead extracted from the collective behavior of a system. I don’t want to go into the semantics of what emergence means. For our purposes, it is something extracted by a non-trivial procedure from a system with many degrees of freedom, where the stuff we extract involves all the degrees of freedom simultaneously in a non-trivial way. The system is quantum mechanical, so there are quantum fluctuations and whatnot, and the whole purpose of our study is to measure some property that can be associated with a distance, keeping in mind that the measurement will give you some type of probability distribution over a variable that is supposed to be geometric. Instead of doing all of this analytically, we did it by computer simulation and ran it like an experiment. To top it off, the research was done with an undergraduate student, who ran the simulations and did some of the basic data analysis of the numbers we got.
Here is the longer version.
I’m using the word quantum in the sense of quantum mechanics. This means that we need a Hilbert space of states, and in this Hilbert space of states we get particular states. In the case of quantum gravity, for each quantum state you get one universe so to speak. The states can be superposed, you can have dead/alive cats, there can be interference and in the end you compute probabilities, or probability distributions.
Most interesting Hilbert spaces are infinite dimensional, and they all look the same to the untrained eye. Heck, even to the trained eye they look the same. Thus, one cannot just do random quantum mechanics with a random Hilbert space and see what happens. You need extra structure that lets you make meaningful statements about these states. This usually shows up as a preferred set of variables (let us call them x for lack of a better name), and then one can write a wavefunction of these variables, usually denoted by ψ(x).
A wave function defines a state so long as it satisfies some basic properties, and given a state, one can measure various things that depend on x (remember that x is a set of variables).
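To make the last two paragraphs concrete, here is a toy sketch (my example, not the paper's model) of how a wavefunction of a single variable x defines a probability distribution, from which you can measure things that depend on x:

```python
import numpy as np

# Toy illustration: a single variable x on a grid, with a Gaussian
# wavefunction psi(x) peaked at x = 1. The Born rule turns the
# wavefunction into a probability density |psi(x)|^2, and expectation
# values of things that depend on x follow by integration.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-0.5 * (x - 1.0) ** 2)      # unnormalized wavefunction
prob = np.abs(psi) ** 2                  # Born rule: probability density
prob /= prob.sum() * dx                  # normalize so the density integrates to 1

mean_x = float((x * prob).sum() * dx)    # the expectation value <x> in this state
print(round(mean_x, 3))                  # peaked near x = 1
```

If you change the wavefunction, the distribution changes, and so does everything you measure — which is the whole point below.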
The second point to make is: what does gravity mean above? Well, the modern understanding of gravity is based on the geometry of spacetime being curved. So if you have quantum gravity, you are talking about a system where you have some geometric information, and it is governed by quantum mechanics. This means that the geometry can and should fluctuate.
Once you are here, if you have some quantum system, you might be able to talk about geometry. If you have a single variable x, it could be the volume of the universe. So if your wave function is peaked at large x, you might say that you predict a large universe.
However, in theories of emergent phenomena, in particular, emergent geometry, the geometry is not there as the natural description of your system. The geometry appears as some organizing principle that correlates your various variables in an interesting way and only in some situations. Geometry is not automatic given a wavefunction. So you need models where this can be done in a reasonable way. You need a lot of variables (geometry can have a lot of positions), and the variables have to more or less look the same (they are not completely random). Afterward, you can ask where in this mess of many variables is the geometry hiding. My favorite model for this is the AdS/CFT correspondence, but this is still too hard to be able to give a full analysis of where the geometry comes from.
Understanding emergent phenomena is hard. You need toy models. Especially so if the stuff that is emerging is the geometry of spacetime. In such situations, the spacetime is not there at the beginning. It has to be extracted from some other data. Moreover, if you change your wave function, you change your geometry.
What this means is that the lengths of features and such change depending on your wavefunction, so you not only need to be able to say that you have a geometry, you need to be able to measure distances on it. And to top it off, your geometry is fluctuating, meaning that when you measure you might get more than one answer, with some probability distribution.
Ok, preamble is done. What did we do with my student?
We took a model of emergent geometry that I developed way back when (’05) that goes some way towards understanding the geometry of the AdS/CFT correspondence. The model has some nice wavefunctions that can be argued to have geometric features, and changing your wavefunction lets you change the geometry and topology of these features. This is still too general, so we picked very simple wave functions in the model, and we put them on a computer, so that we could generate probability distributions and see how the geometry depended on the parameters of the model.
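"Putting a wavefunction on a computer" amounts to drawing random configurations with probability |ψ|². The post doesn't say which sampling method was used, so here is a minimal Metropolis sketch for a hypothetical 1D toy target, just to show the kind of thing involved:

```python
import numpy as np

# Minimal Metropolis sampler (hypothetical method, toy target): draw
# samples distributed according to |psi(x)|^2, here proportional to
# exp(-x^2), by proposing small random steps and accepting them with
# probability min(1, p(proposal) / p(current)).
rng = np.random.default_rng(3)

def log_prob(x):
    return -x * x                        # log of the target density, up to a constant

x, chain = 0.0, []
for _ in range(20_000):
    proposal = x + 0.5 * rng.normal()    # small random step
    if np.log(rng.uniform()) < log_prob(proposal) - log_prob(x):
        x = proposal                     # accept the move
    chain.append(x)

samples = np.array(chain[5_000:])        # discard burn-in
print(round(float(samples.var()), 2))    # the target's variance is 0.5
```

Each recorded sample is one "configuration"; histogramming many of them reconstructs the probability distribution the wavefunction predicts.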
You get pictures like the one shown below:
The wave functions have many variables (6N to be precise), where N varies. All of these variables are pretty similar to each other, and get grouped into N collections of 6 variables. Each member of this collection of N is like any other, so it makes sense to compare them all by plotting the individual data in 6 dimensions. Excuse me? 6 dimensions? I don’t know how to see that. So for the picture above I projected them all into 2 out of those six dimensions.
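The projection step is the simplest part of the pipeline. With made-up stand-in numbers (a noisy ring in the first two coordinates, to mimic the donut; none of this is the paper's actual data), it looks like this:

```python
import numpy as np

# A stand-in configuration: N "particles", each described by 6 coordinates.
# Coordinates 0 and 1 sit on a noisy ring (the donut-like feature); the
# other four are small fluctuations. "Projecting onto 2 of the 6
# dimensions" is just keeping two columns.
rng = np.random.default_rng(0)
N = 500

theta = rng.uniform(0.0, 2 * np.pi, N)
radius = 1.0 + 0.1 * rng.normal(size=N)
config = np.zeros((N, 6))
config[:, 0] = radius * np.cos(theta)
config[:, 1] = radius * np.sin(theta)
config[:, 2:] = 0.1 * rng.normal(size=(N, 4))

projected = config[:, :2]                # the 2D data that gets scatter-plotted
print(projected.shape)                   # (500, 2)
```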
Well, you can clearly tell that you get a disk-like geometry and that it has a hole, and it looks more or less like a donut. Now remember that this is only one picture of a typical configuration as described by the wave function, so you can get a lot of pictures like the one above. And then you have to decide what the radius of the hole is.
Well, our procedure was designed before taking the data. We wanted to average, over configurations, some set of functions that tell how far the particles are from the origin. We wanted these functions to be dominated by the particles that are close to the hole of the donut, rather than the ones that are far away. We also wanted these functions to be easily computable and democratic between the degrees of freedom. We decided on
So, given a configuration, you weigh the projected position on the 12 plane by how close the different variables are to the origin. The Pi in the equation above is a projection. You want to average these over the particles, so you need to divide by N, and then you have to choose what to do with all the values that you record between configurations. Finally, the idea was to take k to infinity (meaning large) to define the radius.
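The description above can be sketched in code. The actual formula appeared as an equation in the original post, so the weight used here — an inverse power of the distance to the origin — is an assumption based on the text; it does have the stated property of being dominated by the particles closest to the hole:

```python
import numpy as np

# Hypothetical radius estimator, assuming an inverse-power weight in the
# distance to the origin (the paper's exact formula is not reproduced
# here). For each configuration, average r^(-k) over the N particles;
# raising to the power -1/k then gives a length dominated, for large k,
# by the particles nearest the hole. Configurations are again stand-in
# noisy rings, not the model's real samples.
rng = np.random.default_rng(1)
N, n_configs, k = 500, 200, 4

estimates = []
for _ in range(n_configs):
    theta = rng.uniform(0.0, 2 * np.pi, N)
    r = np.abs(1.0 + 0.1 * rng.normal(size=N))   # distances to the origin
    inv_moment = np.mean(r ** (-k))              # (1/N) * sum_i r_i^(-k)
    estimates.append(inv_moment ** (-1.0 / k))   # radius estimate, one per configuration

radius_k = float(np.mean(estimates))
print(round(radius_k, 2))                        # close to the ring radius 1
```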
The ideal definition is that you take it as above, average, do some simple arithmetic, and voila: you get some typical radius with a statistical distribution attached to it. However, we found that if k was large enough, the average of the above expression was infinite, and that is bad, because then it would tell you that the radius of the hole is zero. What we found was a probability distribution with some moments not defined.
So we had to work out different averaging procedures that take this feature of the probability distribution into account, and each of these different ways of doing things gives a different answer. Moreover, if you go to large N, the result is supposed to converge, for all the different measurements, to the same value no matter what. But how fast it converges depends on the procedure for getting there and on what precisely one is averaging.
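The averaging problem can be illustrated with a standard heavy-tailed toy (illustrative numbers only, not the paper's distribution): a Pareto distribution with tail index 1.5, whose mean exists but whose variance does not. The sample mean then converges painfully slowly, while order statistics like the median stay stable — so the choice of averaging procedure really does matter:

```python
import numpy as np

# Draws from a Pareto distribution with minimum value 1 and tail index
# alpha = 1.5: finite mean, infinite variance. The sample median is a
# well-behaved estimator here, unlike the sample mean, whose fluctuations
# between runs remain large even for many samples.
rng = np.random.default_rng(2)
alpha = 1.5
samples = 1.0 + rng.pareto(alpha, size=100_000)

median = float(np.median(samples))
print(round(median, 2))   # the exact median is 2**(1/alpha), about 1.59
```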
Our main conclusion was that how you average matters, and that our N were not really large enough to be very conclusive on some things we wanted to determine. Therefore, we need a bigger simulation. In a certain sense, this helps to understand the difficulty in various issues related to emergent wobbly geometries.
Finally, I have the Homer Simpson solution to all of this: eat the donut! Duh.