Re: How to display a "double" in all its precision???
jmcgill wrote:
Patricia Shanahan wrote:
jmcgill wrote:
Patricia Shanahan wrote:
Once you get into actual arithmetic, everything gets more complicated.
Is it at least correct to claim that 53 bits of binary precision
guarantees no less than 15 decimal digits? Even this, no doubt,
depends on the exponent.
I'm not sure I see how the exponent affects it that much, as long as you
are thinking of significant digits.
I suspected that larger exponents lead to more granularity in the ranges
or something like that.
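The 15-digit claim above can be spot-checked. This is just an illustrative sketch (the class name and sample values are mine): since 2^53 ≈ 9.0e15 exceeds 10^15, any decimal with 15 significant digits should survive a round trip through double.

```java
// Sketch: a 15-significant-digit decimal, converted to double and
// formatted back with 15 significant digits, comes back unchanged,
// regardless of where the exponent places it.
public class RoundTrip {
    public static void main(String[] args) {
        String[] samples = {"1.23456789012345",
                            "9.87654321098765e+300",
                            "5.55555555555555e-200"};
        for (String s : samples) {
            double d = Double.parseDouble(s);
            // %.14e prints one digit before the point and 14 after:
            // 15 significant digits in total.
            String back = String.format("%.14e", d);
            System.out.println(s + " -> " + back);
        }
    }
}
```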
I think you need to distinguish more between absolute and relative effects.
For example, the absolute difference x-y between two consecutive
representable numbers, x and y, is a strictly increasing function of the
exponent. The relative difference (x-y)/x varies within a given value of
the exponent, but does not increase with exponent.
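Java's Math.ulp makes this easy to see directly. A small sketch (the class name and sample values are mine):

```java
// The absolute gap between consecutive doubles, Math.ulp(x), grows
// with the exponent, but the relative gap ulp(x)/x stays between
// roughly 2^-53 and 2^-52 for all normalized values.
public class UlpDemo {
    public static void main(String[] args) {
        for (double x : new double[] {1.0, 1024.0, 1e15}) {
            double gap = Math.ulp(x); // distance to the next double above x
            System.out.printf("x=%g  absolute gap=%g  relative gap=%g%n",
                              x, gap, gap / x);
        }
        // Absolute gaps here: 2^-52, 2^-42, and 0.125 - strictly
        // increasing with the exponent - while the relative gap
        // barely moves.
    }
}
```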
Also, when I posted to the thread I somehow thought I was posting on a C
group, not a Java group. I realize Java programmers do not generally
deal with the bit-level representation of data.
I'm neither a Java programmer nor a C programmer. I'm a programmer who
happens to be using Java right now. I may ignore some details when
working in high-level languages, but that does not mean I understand
them any less than when I'm working in assembly language.
I'm wondering all this because in other disciplines, "error bounds" are
always such an early, and often repeated, focus. Yet I had never seen,
nor been asked for, error bounds for IEEE numeric representations.
There is a required accuracy for all the basic operations, specified in
the IEEE 754 standard, although some implementations confuse matters by
keeping intermediate results with more accuracy. That tends to reduce
the need for discussion.
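That required accuracy for the basic operations can be spot-checked from Java. A sketch of my own (using Math.nextUp/nextDown, Java 8+): IEEE 754 requires +, -, *, / to return the representable value closest to the exact mathematical result, so 1.0/3.0 should be closer to 1/3 than either of its neighboring doubles.

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Spot-check of correct rounding: compare 1.0/3.0 and its two
// neighboring doubles against 1/3 computed in 60-digit decimal.
public class CorrectRounding {
    public static void main(String[] args) {
        BigDecimal exact = BigDecimal.ONE.divide(
                new BigDecimal(3), new MathContext(60));
        double q = 1.0 / 3.0;
        // new BigDecimal(double) is exact: it expands the binary value.
        BigDecimal err     = new BigDecimal(q).subtract(exact).abs();
        BigDecimal errUp   = new BigDecimal(Math.nextUp(q)).subtract(exact).abs();
        BigDecimal errDown = new BigDecimal(Math.nextDown(q)).subtract(exact).abs();
        // The quotient should beat both neighbors.
        System.out.println(err.compareTo(errUp) < 0
                        && err.compareTo(errDown) < 0);
    }
}
```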
A good library specification should discuss error bounds for those
functions whose implementation is allowed some slack. See, for example,
the Java API documentation for sin:
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Math.html#sin(double)
"The computed result must be within 1 ulp of the exact result."
("ulp" is short for "unit in the last place", a difference of one in the
least significant bit of the fraction. See the top of the referenced
page for a more detailed explanation.)
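One way to get a feel for the 1-ulp bound, as a sketch of my own: StrictMath.sin (the fdlibm reference) is also accurate to within 1 ulp, so Math.sin and StrictMath.sin can differ from each other by at most about 2 ulps at any point.

```java
// Since Math.sin and StrictMath.sin are each within 1 ulp of the
// exact result, their mutual disagreement is bounded by ~2 ulps.
public class SinUlp {
    public static void main(String[] args) {
        double worst = 0.0;
        for (double x = 0.1; x < 10.0; x += 0.1) {
            double a = Math.sin(x);        // within 1 ulp of exact
            double b = StrictMath.sin(x);  // fdlibm, also within 1 ulp
            double diffInUlps = Math.abs(a - b) / Math.ulp(b);
            worst = Math.max(worst, diffInUlps);
        }
        System.out.println("worst disagreement (ulps): " + worst);
    }
}
```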
Beyond the basics, the subject rapidly gets very complex; see
"numerical analysis" in any good technical bookstore or library.
Floating point application accuracy is a difficult, but intensely
studied, subject.
Patricia