Re: When to use float (in teaching)
Thomas Pornin wrote:
> According to Eric Sosman <esosman@ieee-dot-org.invalid>:
>> The bubble may have been prodded, but exhibits some
>> resilience and a resistance to bursting. For what it's
>> worth, my system[*] takes 5.88 seconds to do 200 naive
>> Gaussian eliminations on a 250x250 double matrix, 4.86
>> seconds to do the same on a float matrix. That's a 21%
>> speed penalty for double, which seems more than "modest."
>> [*] 3 GHz Pentium 4, Windows XP SP3, Java 1.6.0_13.
> That's why I wrote "in-CPU processing". Your matrix uses about
> 250 KB with floats, 500 KB with doubles. Your CPU has only 32 KB
> of L1 cache for data, so what you are seeing is a cache effect.
> For _storage_ (as opposed to _computations_), you should use
> floats if their precision is sufficient for your data.
If we could get a useful "CPU-only" performance measure,
the distinction might be of practical value. But as long as
the CPU must acquire operands from and deliver results to RAM,
the CPU-plus-RAM system cannot run faster than the RAM will.
To put it another way, it doesn't matter that the CPU knows
the answer "now" if it can't reveal it until "later."
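For concreteness, here is a minimal sketch of the kind of naive elimination being timed above. This is my guess at the shape of the benchmark (in-place, no pivoting); the actual code behind the 5.88/4.86-second figures is not shown in the thread:

```java
// Naive in-place Gaussian elimination, float storage throughout.
// Sketch only: the real benchmark ran 200 of these on a 250x250 matrix.
public class GaussBench {
    static void eliminate(float[][] a) {
        int n = a.length;
        for (int k = 0; k < n; k++) {
            for (int i = k + 1; i < n; i++) {
                float m = a[i][k] / a[k][k];   // elimination multiplier
                for (int j = k; j < n; j++) {
                    a[i][j] -= m * a[k][j];    // subtract scaled pivot row
                }
            }
        }
    }

    public static void main(String[] args) {
        float[][] a = {{2f, 1f}, {4f, 3f}};
        eliminate(a);
        System.out.println(a[1][0] + " " + a[1][1]); // 0.0 1.0
    }
}
```

Swapping `float[][]` for `double[][]` doubles the working set from roughly 250 KB to 500 KB, which is exactly the cache pressure Pornin describes.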
> Nonetheless, when you perform the multiplications and subtractions that
> Gaussian reduction entails, you should use doubles (i.e. you read floats
> from the matrix, cast them to doubles, perform the computation, then
> cast the result back to float and store it in the matrix).
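In code, I take that recipe to mean something like the following (my sketch, not Pornin's):

```java
// One row update of the elimination done in double precision while
// the matrix itself stays float -- a sketch of the recipe above.
public class MixedPrecision {
    static void updateRow(float[][] a, int k, int i) {
        int n = a[k].length;
        double m = (double) a[i][k] / (double) a[k][k];
        for (int j = k; j < n; j++) {
            // widen, compute in double, then narrow once for storage
            a[i][j] = (float) ((double) a[i][j] - m * (double) a[k][j]);
        }
    }

    public static void main(String[] args) {
        float[][] a = {{2f, 1f}, {4f, 3f}};
        updateRow(a, 0, 1);
        System.out.println(a[1][0] + " " + a[1][1]); // 0.0 1.0
    }
}
```

(In Java the explicit widening casts are redundant -- a float operand in a double expression is promoted automatically -- but they mirror the recipe as stated. Only the final narrowing cast to float is required.)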
It would take an analysis beyond my feeble powers to say
for sure when it is or is not better to perform intermediate
calculations in higher precision. Keep in mind that rounding
to double and then rounding to float may give a different
result than rounding to float immediately. (Analogy: Rounding
4.047 to one decimal place gives 4.0, but rounding it first to
two places -- 4.05 -- leads to a final result of 4.1.)
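The same effect shows up in binary. The example below is mine, not from the thread: a decimal string sitting just below the midpoint between two adjacent floats converts correctly when parsed straight to float, but parsing it to double first lands exactly on that midpoint, and the second rounding then breaks the tie the other way:

```java
public class DoubleRounding {
    public static void main(String[] args) {
        // Just below the midpoint between the adjacent floats
        // 1 + 2^-23 and 1 + 2^-22 (midpoint = 1.000000178813934326171875).
        String s = "1.0000001788139343261718749";
        float direct    = Float.parseFloat(s);           // rounded once
        float viaDouble = (float) Double.parseDouble(s); // rounded twice
        System.out.println(direct == viaDouble);         // prints false
    }
}
```

The double conversion rounds the string up to the exact midpoint (it is representable as a double), and the float conversion then rounds that tie to even, giving 1 + 2^-22 instead of the correctly rounded 1 + 2^-23.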
--
Eric Sosman
esosman@ieee-dot-org.invalid