Re: Test for NaNs?
* Dave Harris:
alfps@start.no (Alf P. Steinbach) wrote (abridged):
The user knowingly asks for optimizations that mean the program is
no longer conforming to the IEEE 754 standard, and the
documentation states this clearly. That is all OK. In that respect
one gets exactly what one asks for.
The way I see it, the compiler flag means that the user asserts that
their code does not use NaNs. If this assertion is true, then the code
still conforms (it uses a subset of IEEE 754) and there is no bug.
If the assertion is false, then they get undefined behaviour. In
neither case is the compiler broken.
I addressed this point previously, but the argument was ignored.
The behavior influenced by the flag is, for very good reasons, very
easily /testable/, via is_iec559.
Which means that with a standard-conforming compiler, code can safely
rely on testing whether the behavior is present or not, e.g. for the
purpose of deciding whether to return a NaN or throw an exception, or
for use in a compile-time assertion.
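A minimal sketch of that kind of test, with an illustrative helper
name (no_result is not from the discussion, just a demonstration):

```cpp
#include <limits>
#include <stdexcept>

// Illustrative helper: report "no result" as a quiet NaN when the
// implementation promises IEEE 754 semantics, and by throwing
// otherwise. The decision hinges entirely on is_iec559 being honest.
double no_result()
{
    if (std::numeric_limits<double>::is_iec559) {
        return std::numeric_limits<double>::quiet_NaN();
    }
    throw std::runtime_error("NaN not supported on this implementation");
}
```

If is_iec559 is allowed to report true while NaN semantics are
actually disabled, code like this silently does the wrong thing.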
However, with the compilers mentioned in an earlier thread, including
g++, code can *not* reliably test the behavior.
And the reason for that is that the standard's absolute requirement on
is_iec559, that the type conforms to IEEE 754, is broken, which means
that the compiler, or rather its standard library implementation, has
a bug.
It's similar to the flags which disable runtime type information, or
exception handling. The flag tells the compiler that you don't use the
feature you disabled. It's not the compiler's fault if you lied to it.
The standard library does not provide ease of testing for presence of
RTTI or exception handling.
The correct similarity is not with RTTI or exceptions, but rather with
something that the standard library provides ease of testing for. For
example, consider a promise to only use ASCII characters, combined
with letting numeric_limits<char>::is_signed have an arbitrary value.
is_signed is there to enable the code to /test/ for the behavior.
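Concretely, the point of such a trait is that code can branch on it
rather than on what the documentation claims (a tiny sketch):

```cpp
#include <climits>   // CHAR_MIN, for cross-checking
#include <limits>

// Branch on the trait, not on an assumption about the platform;
// this is exactly what an arbitrary is_signed value would break.
inline int char_min_value()
{
    return std::numeric_limits<char>::is_signed ? CHAR_MIN : 0;
}
```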
The standard's requirements can't be circumvented by documenting the
non-conformance as a promise to not use a feature, to only use ASCII
characters.
If you think about a standard conformance test suite for C++, which
unfortunately we don't have, you see this easily: a test suite is not
influenced by (dis-)ingenious documentation wording.
Since the compiler defines a preprocessor symbol to indicate this
optimization, it's very easy to fix the bug: just a conditional in
the <limits> source code.
I'd have thought the compiler flag could be set on some compilation
units and not on others. You'd set it on the compilation units which
(a) had performance-critical floating-point code, and (b) had been
inspected to verify they didn't use NaNs. You would leave the flag off
for other compilation units to save the bother of checking them (and
because they might be 3rd party libraries to which you don't have
clean source).
We can't have is_iec559 true in some compilation units and false in
others, because that would violate the One Definition Rule. So we
can't use conditional compilation in <limits>. The compiler has to
pick a value, and it would be a shame if it had to use false for all
compilation units if just one compilation unit used the flag.
This is a very good point, thanks! I hadn't thought of that, and it
means that what was implied by, and what I actually was thinking of
for, my proposed solution, was Very UnGood(TM). But now that I do
think about it, the proposed solution is still good: simply let the
conditional in [limits] be that is_iec559 is not defined at all, an
explicit non-conformance that makes sense, instead of a hidden one
that doesn't. And your point raises two issues.
First, the standard's means of testing for IEEE 754 is a little
impractical since it's all-or-nothing -- some way of checking just
for representation conformance would help.
Second, a compiler option that changes the semantics of a type is in
reality an option that makes that type a different type.
With the use of this option in some compilation units and not in
others, one is effectively using the same type name for two different
types, where, for example, standard C++ code that inadvertently passes
a NaN into a routine in a compilation unit compiled with the non-IEEE
754 option can really cause havoc.
Different types should ideally have different names. :-)
Then the logical constraints can also be enforced at compile time,
which is what the strong type checking of C++ is all about.
For example, providing implicit conversion from __fast_double to
double, but requiring explicit conversion from double to __fast_double.
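A library-level sketch of that pair of conversions (FastDouble is a
stand-in for illustration only; the actual __fast_double would have to
be a compiler built-in to get the fast semantics):

```cpp
// Illustrative stand-in for the proposed __fast_double built-in.
class FastDouble
{
public:
    explicit FastDouble(double v) : value_(v) {}   // double -> FastDouble: explicit only
    operator double() const { return value_; }     // FastDouble -> double: implicit
private:
    double value_;
};

// A plain standard-semantics function; accepts a FastDouble via the
// implicit conversion out.
inline double twice(double x) { return 2 * x; }

// Usage:
//   FastDouble f(1.5);     // fine: conversion in is explicit
//   double d = twice(f);   // fine: conversion out is implicit
//   FastDouble g = 0.5;    // error: copy-initialization needs explicit ctor
```

That way the unsafe direction, smuggling an ordinary double (possibly
a NaN) into fast-arithmetic code, is the one that requires an explicit
cast.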
So it seems to me that what these compilers do is entirely
reasonable and
logical, given that the optimisation from the flag is worth having.
It means that code can not reliably test for IEEE 754 semantics.
I agree that e.g. g++'s implementation of the optimization is worth
having when the only alternative would be to not be able to do such
optimizations.
But that is not the only alternative.
Above I outlined two better alternatives: making the compiler
completely conforming, and more safe, by adding a __fast_double type;
or making the non-conformance very explicit, removing the lie, by not
defining is_iec559 in a compilation unit that uses the non-IEEE 754
fast arithmetic option.
It seems to me that the latter should be very easy to do.
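A sketch of how that could look in a <limits> implementation, assuming
g++'s __FAST_MATH__ macro (which g++ predefines under -ffast-math;
other compilers use other symbols, so the macro name is an assumption):

```cpp
// Sketch of the proposed conditional, as it might appear inside a
// standard library's numeric_limits<double> specialization (shown
// here as a standalone struct for demonstration).
struct limits_double_sketch
{
#ifndef __FAST_MATH__
    // Defined only when the compiler makes the IEEE 754 promise;
    // under the fast-math option the member is simply absent, so
    // code that tests it fails to compile instead of being lied to.
    static const bool is_iec559 = true;
#endif
};
```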
Cheers, & thanks,
- Alf
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]