Archive for August, 2009
Scientists are these mythical creatures that live somewhere between the clouds and the stratosphere. Their working habits include wearing lab coats, pocket protectors and calculators. Many of them (some would say most) are unkempt and dress poorly, except the ones that don’t. Scientists speak their own language, which is filled with long words and an adherence to strict meanings. When trying to communicate with the rest of the people, they describe their work in overly complicated technical terms. This inspires fear, which is exacerbated by the natural fear of the unknown. The typical stereotype of a scientist is the mad scientist, as is apparent in movies and television dramas. Mostly, these show scientists as being unconcerned with the day-to-day stuff that makes most of humanity tick. Most importantly, the attitude portrayed is one of arrogance: scientists are depicted as condescending ‘know-it-all’ SOBs who can’t be bothered to explain their work to people who don’t understand it.
At least in recent years it seems that some kinds of science are getting more respect (forensic science, for example), and one can find random clippings on the web about how that has affected the real-life work of the individuals who practice it. The reality is that scientists are much closer to the non-scientific community than the stereotype suggests.
So now we enter the theme of the day. How can we overcome the above hurdle for communicating science to the public?
Well, I’m buried up to my neck in work trying to finish up a paper. I have been at it for a while. A project that should have ended with a 20-page paper ballooned on me, and now I am writing a paper on the edge of 50 pages.
Which brings me to the title of the post. Fifty pages is a long paper, and writing one brings an additional problem: by the time I’m writing the end, I’ve somewhat forgotten what I wrote before. More specifically, not so much the content, but the way in which it was written. And even more specifically, what precise notation was used.
For example, just to give you a feel: was a given quantity written with one letter or another? Was it the uppercase or the lowercase version? Was it decorated with a tilde, or left bare?
I’m writing stream of consciousness, so these things can be fixed later when I’m combing through the paper. Moreover, the longer the paper, the more symbols and fonts one needs, and they start overlapping. This means that notation degenerates as I write: it stays consistent only for as long as I can work in one day, which is about five pages. So that is what I will call the correlation length of notation: the number of pages one writes before the notation mutates and starts getting disordered.
So, on a 50-page paper there are ten correlation lengths of notation, and the end looks nothing like the beginning in terms of notation. This gets worse with more authors, and don’t even get me started on writing books. To fix this, one has to ‘cool down the system’ so that it becomes ordered (I’m making an analogy with ferromagnetism here). This requires time and many passes, so a paper of length L seems to take of order L^2 time to write down. Maybe that critical exponent is different.
There is always a plan B: give a guide to notation changes so that the work is piled on the reader. What do you think of this strategy? This seems to be the way for books, because there are many conventions that overlap: they come from different developments by different authors. Fortunately our brains seem to be able to read contextually: E can be the energy and the electric field, and e can be the electric charge and the Euler constant, all of them in the same equation, when it is obvious how to interpret them. Isn’t it?
So there was this article in the New York Times today about successful attacks on credit card databases, in particular about a ring that is responsible for stealing 130 million credit card numbers. I wonder if these are the guys responsible for my credit card company sending me a new card number recently, causing a lot of hassle as I tried to fix all the automated bill payments I have. In any case, the article explains that credit card companies don’t encrypt the credit card information when they are communicating with the machines.
What? How can that be?
Does technology really lag so much?
In the US alone, I would expect that more than a few billion dollars are lost each year to credit card fraud and stolen identities. I would also assume that there are fewer than 100 million credit card reading machines and ATMs (about 1 per 5 people). The cost of replacing these machines so that they encrypt the data should be less than $100 per machine: they are not as expensive as computers, and many of them could get a quick fix with a software update. This is doable within a few years (two years should be enough for even the most backward shop to get a new machine).
This puts the cost of encrypting the system at about $10-20 billion in the US. There are standard encryption technologies and commercial protocols that are well tested and secure. How hard can it be to implement, seriously?
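The back-of-the-envelope estimate above can be written out explicitly. The machine count and per-unit cost are the post’s rough assumptions, not real figures:

```python
# Assumed figures from the post: fewer than 100 million card readers
# and ATMs in the US, and under $100 to upgrade or replace each one.
machines = 100 * 10**6
cost_per_machine = 100

total = machines * cost_per_machine
print("upper bound: $%d billion" % (total // 10**9))  # upper bound: $10 billion
```

The extra slack up to $20 billion presumably covers installation and the shops that need entirely new hardware.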
The rest of the world would follow soon. If credit card fraud is bad in the US, as far as I know it is even worse in Europe. I expect the cost of encrypting to be much less, in the long term, than the cost of issuing new credit card numbers to all of their customers plus picking up the bills that credit card fraud entails.
There’s not much on my computers, but I’m extremely paranoid about logging in and out of them. I don’t trust internet cafe computers: there is key-logging software on a lot of them, so I only use ssh-encrypted channels to do various things. So how can people who care about money be so careless?
The easiest way out of this is to require the government to mandate cryptographic standards for communicating between machines and banks, so that even if people get access to the data stream, there is very little information that can be pulled out of it. Seeing as how banks have lost a lot of clout recently, this is probably a good time to implement this.
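As one concrete sketch of what such a mandated standard could include, here is keyed message authentication using Python’s standard hmac module, so that someone with access to the data stream cannot alter a transaction undetected. The key and message format are invented for illustration; a real standard would also encrypt the payload itself, not just authenticate it.

```python
import hashlib
import hmac

# Hypothetical per-terminal shared secret; in practice the bank would
# provision a key to each machine.
KEY = b"per-terminal-secret-key"

def tag(message):
    """Compute an HMAC-SHA256 tag for a transaction message."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message, received_tag):
    """Check a tag in constant time; tampered messages fail."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"charge account X amount 19.99"
t = tag(msg)
print(verify(msg, t))                                  # True
print(verify(b"charge account X amount 9999.99", t))   # False
```

Authentication alone already defeats the silent tampering half of a man-in-the-middle attack; confidentiality of the card number would additionally need encryption.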
In the meantime, here is something from XKCD regarding man-in-the-middle attacks that should make you all laugh for a while.
Well, I’m wondering, is this cool? or does this just date me?
In case you’re wondering about dates, I found this one the other day.
Sometimes you work hard on a project, and you have a theoretical framework that explains how things should behave in certain limits, along with a natural expansion that should let you fit the limit you are studying. Then you take some data, say from a simulation, and the data just does not conform to these expectations.
You then get stuck with data that you don’t really know how to analyze. And it is terribly frustrating.
You can try to understand what direction the data is pointing to, but it can be more of a ‘nebulous oracle’ than a clear straight arrow pointing to the path you should take. This is normal: it happens all the time.
Think about that.
The whole idea was to get acquainted with the basics of Python. So I wanted to generate some random data, use some conditionals, use one external package (in this case random), have at least one loop, get input from the keyboard, and print out the data both on screen and to a file, formatting the file for ease of reading.
I was purposefully sloppy: I did not verify the input, nor convert it to float values, to see how the program would read it. It guessed correctly with the input I gave it. The file output is handled by fout, an object that opens the file ‘data.txt’ for writing and lets you write to it. It’s really not a bad first program for a beginner MCMC code. Notice that there are no type declarations; these would have made the equivalent C code longer.
import random

s2 = input("Input bias: a number between 0 and 1 ")
digits1 = input("How many digits? ")

# Python 2's input() evaluates what you type, which is why no explicit
# float conversion is needed here.
# Initial state of the chain: a fair coin flip.
s = random.random()
if s < 0.5:
    x = 0
else:
    x = 1

fout = open("data.txt", 'w')
j = 0
while j < digits1:
    # Update step (the original post's code was truncated here, so this
    # body is a reconstruction): flip the state with probability s2.
    if random.random() < s2:
        x = 1 - x
    print x
    fout.write("%d " % x)
    j = j + 1
fout.close()
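Since the post mentions formatting the file for easier reading, here is a minimal sketch of fixed-width output (Python 3 syntax; the data and the column layout are made up for illustration):

```python
import random

random.seed(1)  # reproducible fake data
samples = [random.randint(0, 9) for _ in range(12)]

with open("data.txt", "w") as fout:
    for i, x in enumerate(samples):
        fout.write("%3d" % x)      # fixed-width, 3-character columns
        if (i + 1) % 4 == 0:       # newline after every 4 entries
            fout.write("\n")
```

The result is a neat grid of numbers instead of one long run-on line, which is all “formatting for ease of reading” really needs here.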