
Archive for November, 2010

I have written many papers in the last 20 years. Recently, I was taking a trip down memory lane, reading some of my older stuff, and I happened on a pretty identity in one of my papers, something I had forgotten. That particular identity probably has someone else’s name attached to it. I don’t believe I was the first person to discover this algebraic identity, but I wouldn’t know where to start looking for the correct attribution. If the identity has a name attached to it, I wouldn’t be surprised if the culprit who found it first is pushing daisies.

Mathematics keeps on getting rediscovered after all.


The identity is as follows.

Consider a set of s numbers (they can be real, rational, algebraic, or even messier commutative-algebra objects, so long as the multiplicative inverses that appear below are well defined). Let us call this set

S= \{ \alpha_1, \dots , \alpha_s\}

And consider the set of permutations of the first s integers, where \sigma is one such permutation

\sigma\in Perm\{1, \dots, s\}

We are then instructed to take the sum

{\sum_{\sigma\in Perm\{1, \dots, s\}} } {\frac 1{(\alpha_{\sigma(1)}+\alpha_{\sigma(2)}+\dots +\alpha_{\sigma(s)}) }}{\frac 1{(\alpha_{\sigma(2)}+\dots+\alpha_{\sigma(s)})}}\dots {\frac 1{\alpha_{\sigma(s)}}}

The stipulation is that the sum has no infinities (the numbers are generic, so none of the partial sums in the denominators vanish).

This sum is equal to

{\frac 1{\alpha_1 \cdot \alpha_2 \dots \alpha_s}}

As everyone can see, it’s an obvious identity, so the proof is left to the reader ;-)
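If you don’t feel like finding the proof, here is a quick numerical sanity check, just a sketch in Python (the choice s = 4, the random range, and all the names are mine, not anything from the paper):

import itertools, math, random

s = 4
alphas = [random.uniform(0.5, 2.0) for _ in range(s)]  # generic numbers, no vanishing tails

total = 0.0
for sigma in itertools.permutations(range(s)):
    term = 1.0
    for k in range(s):
        # divide by the tail sum alpha_{sigma(k)} + ... + alpha_{sigma(s)}
        term /= sum(alphas[i] for i in sigma[k:])
    total += term

print(total, 1.0 / math.prod(alphas))

The two printed numbers should agree up to floating-point round-off.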

For some reason, looking at it I can imagine it appearing miraculously in the twistor formulation of Yang–Mills scattering amplitudes… well, this is just random speculation.



Investing in (basic) science is like playing the lottery. If you buy just one ticket, you will probably lose your investment. But if you buy all the available tickets, you can win big. The NYT has a piece reminding readers of this fact. The problem for most states and nations is that they cannot afford to buy all the tickets, or even a small fraction of them. So instead they end up having to decide what looks most promising. This is where it gets tricky.

I think that is enough philosophizing for today. I better get back to work.


Many times in physics one wants to solve systems of second-order ordinary differential equations (equations of motion, for example). If the dynamics comes from a Lagrangian, it is standard to recast them in first-order form by passing to the Hamiltonian formalism and working in “phase space”. Once you get to this stage, you can try putting the system on a computer by evolving the equations of motion discretely. Many times this destroys certain aspects of the dynamics. However, if you do things right, you can get some things to work better than expected.

For example, in the Hamiltonian formalism of the Kepler problem, one would have a Hamiltonian of the form

H= \frac {p_1^2 + p_2^2}{2m} - \frac{K}{r}

where

r= \sqrt{x_1^2+x_2^2}

The minus sign indicates that the energy decreases as r gets smaller (the potential is attractive).

A naive implementation of the evolution of the system is given by evolving

p_i[t+\delta t] = p_i[t] - \partial_{x_i} V[r[t]] \delta t

and

x_i[t+\delta t]= x_i[t]+ \frac{p_i[t]}{m} \delta t

However, after staring at this for a while, one notices that the dynamics is not reversible: both x and p have changed, so trying to go back by sending \delta t\to -\delta t does not land you exactly where you started.
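To see this concretely, here is a minimal sketch in Python. The units K = m = 1, the time step, and the initial condition are all my arbitrary choices, and I am reading the position update as using the momentum at time t:

import numpy as np

K, m, dt = 1.0, 1.0, 0.01

def grad_V(x):
    # gradient of the potential V = -K/r in two dimensions
    return K * x / (x @ x) ** 1.5

def naive_step(x, p, dt):
    p_new = p - grad_V(x) * dt  # kick, using the state at time t
    x_new = x + p / m * dt      # drift, also using the state at time t
    return x_new, p_new

x0, p0 = np.array([1.0, 0.0]), np.array([0.0, 0.8])
x1, p1 = naive_step(x0, p0, dt)
x2, p2 = naive_step(x1, p1, -dt)         # try to undo the step
print(np.abs(x2 - x0), np.abs(p2 - p0))  # mismatch of order dt^2

The mismatch per step is of order \delta t^2, and it accumulates over a long evolution.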

There is a very nice fix to this problem: you think of the momenta as being evaluated at half times, and the positions at full times. That is, we get

p_i[t+\delta t/2] = p_i[t-\delta t/2] - \partial_{x_i} V[r[t]] \delta t

and

x_i[t+\delta t]= x_i[t]+ \frac{p_i[t+\delta t/2]}{m} \delta t

and even though this looks almost identical to what we had before, it is now time reversible (just send \delta t\to -\delta t and do the appropriate half-step shifts to check that you really get back to where you started).

This is called the leapfrog algorithm. For problems like the one above, it has rather nice properties. The most important one is that it respects Liouville’s theorem: it keeps the phase-space volume element constant.
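Here is the same Kepler problem evolved with leapfrog steps, again as a sketch under my assumed units K = m = 1. The “appropriate shift” mentioned above amounts to the inverse step drifting first and then kicking:

import numpy as np

K, m = 1.0, 1.0

def grad_V(x):
    # gradient of the potential V = -K/r
    return K * x / (x @ x) ** 1.5

def leapfrog_step(x, p_half, dt):
    # kick: p[t + dt/2] = p[t - dt/2] - dV/dx(x[t]) dt
    p_half = p_half - grad_V(x) * dt
    # drift: x[t + dt] = x[t] + p[t + dt/2]/m dt
    x = x + p_half / m * dt
    return x, p_half

def leapfrog_step_back(x, p_half, dt):
    # exact inverse of the step above: drift back first, then kick back
    x = x - p_half / m * dt
    p_half = p_half + grad_V(x) * dt
    return x, p_half

x, p = np.array([1.0, 0.0]), np.array([0.0, 0.8])
x0, p0 = x.copy(), p.copy()
dt = 0.01
for _ in range(1000):
    x, p = leapfrog_step(x, p, dt)
for _ in range(1000):
    x, p = leapfrog_step_back(x, p, dt)
print(np.max(np.abs(x - x0)), np.max(np.abs(p - p0)))

After a thousand steps out and a thousand steps back, the state returns to the initial data up to floating-point round-off.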

In examples like the one above, it does something else that is quite amazing. If you remember Kepler’s second law (equal areas are swept out in equal time intervals), it is really just angular momentum conservation. I’ll leave it to you to find a proof that the discrete system above sweeps equal areas in equal times around the origin x_{1,2}= 0. I learned this fact recently in a conversation in my office and was quite pleased with it, so I thought it would be nice to share it.
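Without spoiling the proof, here is a self-contained numerical check (same assumed units and initial data as above, all my choices) that the scheme conserves the discrete angular momentum x_1 p_2 - x_2 p_1, which is the equal-areas statement:

import numpy as np

K, m, dt = 1.0, 1.0, 0.01

def ang_mom(x, p):
    return x[0] * p[1] - x[1] * p[0]

x, p = np.array([1.0, 0.0]), np.array([0.0, 0.8])
L0 = ang_mom(x, p)
for _ in range(100000):
    p = p - K * x / (x @ x) ** 1.5 * dt  # kick: the impulse is parallel to x
    x = x + p / m * dt                   # drift: the displacement is parallel to p
print(abs(ang_mom(x, p) - L0))           # stays at round-off even after 10^5 steps

The comments are a hint at where the proof lives.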

This algorithm does quite well on a lot of other systems (like the one I’m studying now for my research). If you have a system with a lot of symmetries, the leapfrog algorithm will sometimes preserve many of those symmetries and the associated conserved quantities, so that you can evolve the system with much larger values of \delta t without loss of information.


