It is well known how accurate Moore’s Law has proved to be. Published by Intel co-founder Gordon E. Moore, it predicted, and still predicts (for at least the next 10 to 20 years), the number of transistors that can be placed on an integrated circuit. As Moore observed, this number has been increasing exponentially, doubling about every two years. For people not familiar with the field, consider that a guy predicted, almost half a century ago, a very annoying 21st-century fact: if you buy a computer, in six months there will be a better one for the same price, and if you buy a camera, in six months it will be in the sales aisle at Fry’s.


As the size of transistors decreases, new physical characteristics must be taken into account for the device to be accurately characterized. One of the terms recently added to the equation is a certain type of noise that very small transistors produce, the so-called random telegraph noise.

Noise has to be cancelled or minimized and, to do so, one has to find a way to model it. I already mentioned the most standard model of noise, AWGN, and there is, or used to be, a model for this new kind of noise as well. Engineers at the U.S. National Institute of Standards and Technology seem to have discovered that the theory explaining the origin of this noise is wrong. Actually, quoting IEEE’s Spectrum magazine, it is not just wrong but totally wrong. The main problem is that “if you don’t know where it’s coming from, you don’t know how to fix it”. This might be the reason why Michael Keaton is obsessed with finding the origin of some messages hidden in white noise in a decent 2005 horror movie.
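To get a feel for why this noise is so different from AWGN, here is a minimal sketch of the usual way random telegraph noise is modeled: a signal that jumps between just two discrete levels, flipping with some probability at each time step (a two-state Markov process, classically attributed to single charge traps capturing and emitting carriers). The function name, parameters, and values below are my own illustration, not from any standard library.

```python
import random

def telegraph_noise(n_samples, p_switch=0.05, amplitude=1.0, seed=0):
    """Sketch of random telegraph noise: the signal occupies one of two
    levels (+amplitude or -amplitude) and flips between them with
    probability p_switch at each discrete time step."""
    rng = random.Random(seed)
    state = 1.0
    samples = []
    for _ in range(n_samples):
        if rng.random() < p_switch:
            state = -state  # a trap captures/emits a carrier: level flips
        samples.append(state * amplitude)
    return samples

noise = telegraph_noise(1000)
print(sorted(set(noise)))  # only two levels ever appear: [-1.0, 1.0]
```

Unlike AWGN, which spreads over a continuum of values, this process only ever takes two values; the randomness lies entirely in *when* it switches, which is exactly why pinning down the physical origin of the switching matters so much.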

This noise, already a challenging problem for flash and static RAM memories, might become a threat to future low-power logic circuits as their dimensions keep shrinking, as predicted by Moore’s law. The good news is that maybe, in a few years, buying a new computer or digital camera will be a long-term investment.