Monday 24 November 2008

"Almost the square root of 2" - rounding errors in computer code

[Photo by Waka Jawaka]

Computers are great at handling numbers and performing huge numbers of operations on them.  They can repeat the same operation over and over again and will do exactly the same thing every time (unlike a human, who will sometimes do the wrong thing by mistake).  This is great, but it doesn't mean that computed results are free from error, even when the code is free from bugs and implements exactly the algorithm you intended.

In this article, we're going to consider rounding errors in computer code.  Specifically, the errors that arise when your code needs to represent real, non-integer values (eg. as floating point numbers).  Most real values do not have an exact representation in a computer, so you need to take care that the resulting rounding errors are not significant in your results.

Digital representations of numbers
Computers represent numbers using fixed-length strings of bits.  This means that for any given type of number (integer, floating point) there is only a finite set of values it can take.  For integers, this means there will be a (large) maximum value and, for signed integers, an equivalently large negative minimum value.  Inside the range, the integers are exact.  Outside the range, the arithmetic overflows: depending on the language, the value may wrap around, raise an error, or (for floating point) be reported as infinity, which in this context really just means "too big to represent".
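As a quick illustration, here is a sketch in Python of what a signed 32-bit integer does at the edge of its range.  (Python's own integers are arbitrary precision, so the fixed width is emulated here with two's-complement wrapping arithmetic; the helper name is made up for the example.)

```python
def to_int32(x):
    """Interpret x as a signed 32-bit integer (two's complement wrap)."""
    return ((x + 2**31) % 2**32) - 2**31

INT32_MAX = 2**31 - 1        # largest value a signed 32-bit integer can hold

print(to_int32(INT32_MAX))      # 2147483647: inside the range, exact
print(to_int32(INT32_MAX + 1))  # -2147483648: one past the edge wraps around
```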


For non-integer (ie. real) values, however, this doesn't work.  There are infinitely many real values even between zero and one, so representing them all isn't possible.  The usual solution is floating point: store a fixed number of significant digits, an exponent that scales them (so that, for example, 2.34 * 10^57 could be represented; real hardware uses powers of two rather than ten, but the idea is the same) and a bit signifying positive or negative.  This allows a value to cover an enormous range while still only occupying a small number of bits (typically 32 or 64).  But it also means any actual real value must be rounded to the nearest representable value.
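You can see this "nearest representable value" effect directly in Python.  The literal 0.1 cannot be stored exactly, and converting the stored value back to an exact fraction reveals the number the computer is really working with (the exact digits below assume standard 64-bit doubles):

```python
from fractions import Fraction

# Fraction(float) recovers the *exact* rational value of the stored double.
stored = Fraction(0.1)
print(stored)                     # 3602879701896397/36028797018963968
print(stored == Fraction(1, 10))  # False: 0.1 is only approximated
```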

The problem therefore becomes one of limited precision.  Consider a simple fraction, such as 1/3.  Written in this way, it's exact (no information is lost; there is no rounding error).  However, a floating point representation of the same value might be 0.333333, which is close but very definitely not identical.  The question becomes: can you safely ignore the rounding error?

Keeping the errors small
In the above example, the error could reasonably be called small.  If you're calculating the temperature at which to cook your dinner, for example (183.3333 degrees centigrade), such an error won't matter.  But if you're doing a long sequence of calculations where each step relies on the answer from the previous one, the errors may accumulate.  Or perhaps you're finding the difference between two huge values that are almost identical.  For example:


The cost of equipment for your experiment might be computed as:

(0.33334 - 0.33333)*1e9 = 1e4

But note that your result would double if the second number were 0.33332!
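The danger is that the difference you care about can be smaller than the rounding error of the large values it comes from.  Here's a sketch of the extreme case in Python (assuming standard 64-bit doubles): at 2^53 the spacing between adjacent representable values reaches 2, so adding 1 to a number of that size is lost entirely.

```python
big = float(2**53)  # 9007199254740992.0: adjacent doubles here are 2.0 apart

# The +1.0 is smaller than the gap between representable values,
# so it is rounded away before the subtraction ever happens.
difference = (big + 1.0) - big
print(difference)   # 0.0, not the 1.0 that exact arithmetic would give
```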

So you can see that if your calculation is sensitive to the nth significant figure, the "small" rounding error might not be insignificant!  So, how do you avoid this?  Firstly, try to avoid putting your code in the above position, perhaps by rearranging your maths.


For example:

If, in the example above, our input values are actually A = 1/3 * 1e9 + 1e4 and B = 1/3 * 1e9, then it is obvious that we should cancel the terms containing 1/3 on paper, leaving exactly 1e4.

What we've done here is rearrange the calculation so that the large numbers don't swamp the smaller one.  In general, try to find exact parts of the calculation and do them analytically so that your code can just use the exact answer (a great example of this is integration: don't do it numerically if you can solve it exactly!).
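Here's a sketch of that rearrangement in Python.  The exact size of the naive version's error depends on the platform's doubles, so it's only bounded here, not predicted:

```python
A = (1 / 3) * 1e9 + 1e4
B = (1 / 3) * 1e9

naive = A - B        # subtracts two ~3.3e8 values; rounding error survives
rearranged = 1e4     # the 1/3 terms cancelled on paper, so this is exact

print(naive)         # close to 10000.0, but not guaranteed to equal it
print(rearranged)    # exactly 10000.0
```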

Converting from reals to integers

One particular issue can arise when converting reals to integers, because of rounding.  For example, you might have an algorithm that iterates in intervals of 0.1 and then uses a modulus function to remove the whole-number part, leaving only the fraction.  For example:

currentValue = 1.9
currentValue = currentValue + 0.1
outputValue  = mod(currentValue, 1)


So, outputValue should be 0 (because mod(2, 1) = 0).  However, if rounding error has crept in while working with reals, we might instead have:

currentValue = 1.99999999

and mod(1.99999999, 1) = 0.99999999, which is almost 1 rather than the expected 0 (round it to the nearest whole number and you get 1), which is clearly wrong.

The issue here is that a small rounding error occurs at a critical place (ie. right at a whole number, where the result of the modulus jumps from almost 1 back down to 0).  A better solution here is to work exclusively with integers until the very last step:

currentValue = 19
currentValue = currentValue + 1
outputValue  = mod(currentValue, 10) / 10
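Here's the same pair of approaches sketched in Python, using math.fmod for the real-valued modulus.  The value 1.99999999 is written in by hand to stand in for accumulated rounding error:

```python
import math

# Real-valued version: a tiny rounding error lands right at the seam.
currentValue = 1.99999999          # stands in for 1.9 + 0.1 with accumulated error
outputValue = math.fmod(currentValue, 1)
print(outputValue)                 # ~0.99999999, almost 1 instead of the expected 0

# Integer version: work in tenths and only divide at the very last step.
currentTenths = 19
currentTenths = currentTenths + 1
outputValue = (currentTenths % 10) / 10
print(outputValue)                 # 0.0, exactly as intended
```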


In conclusion
Often, the rounding errors intrinsic to real numbers (as represented in a computer) are so small as to be negligible.  However, there are certain cases where this rounding can significantly affect the results you get.  Be aware that this can happen and try to avoid situations where it might be a problem.  And remember: sqrt(2) * sqrt(2) isn't equal to 2 in computer code (although it is close)!
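A quick Python check of that parting claim (with standard 64-bit doubles; math.isclose is the usual way to compare floats in the presence of rounding):

```python
import math

r = math.sqrt(2)
print(r * r)                   # 2.0000000000000004, not 2.0
print(r * r == 2)              # False: exact comparison fails
print(math.isclose(r * r, 2))  # True: compare with a tolerance instead
```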
