Re: mixed-sign arithmetic and auto
James Dennett wrote:
> You don't even implement RVO?
Of course D does RVO! (I know all about RVO; in fact, I invented named
return value optimization back around 1990 or so.)
> Or re-ordering of operations between sequence points?
Many reorderings do not break code.
> Do you always force the FPU to discard additional information
> that would not be stored to memory?
Do you mean discarding the 80 bit internal value when writing to 64
bits? The only time this ever becomes an issue is with test code that
depends on getting exactly the less precise result.
Users complain of that breaking code on x86 all the time.
I'm on the front lines with this, doing tech support for the compiler.
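
A minimal D sketch (mine, not from the exchange) of the kind of
comparison such test code makes; it assumes x86 hardware, where D's
real is the x87's 80 bit type:

import std.stdio;

void main()
{
    real r = 0.1L;    // 0.1 kept at the x87's 80 bit precision
    double d = 0.1;   // 0.1 rounded to 64 bits

    // A test expecting the 80 bit and 64 bit roundings of the same
    // constant to compare equal fails here:
    if (r == d)
        writefln("equal");
    else
        writefln("not equal");   // this branch runs on x86
}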
>> I don't agree. C++ is loaded with unnecessary complexity (*) that takes
>> years to master. Why not spend that effort learning how to be a better
>> programmer? Who actually understands two level lookup?
> Many people understand two phase lookup;
Many people do, out of millions who don't. I go to C++ conferences and
talk to the C++ experts. I get bug reports from experts. I can guarantee
you that many experts do not understand it.
> nobody really understood the previous status quo well enough.
That wasn't why two level lookup was specified. The previous status quo
was a problem because it wasn't a status quo; every compiler did it
differently. (They still do it differently, but at least now there's a
standard one can point to.)
The irony is that D does name lookup in a straightforward manner, and
many C++ people find that hard to understand because they keep looking
for the rat.
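
A sketch of what "straightforward" means here (example mine): in a D
template, a name is simply looked up in the template's lexical scope,
with no dependent/non-dependent split:

import std.stdio;

int level = 1;

void report(T)(T value)
{
    // `level` is found by ordinary scope lookup in the enclosing
    // module; there is no two phase dependent-name machinery.
    writefln("level %s: %s", level, value);
}

void main()
{
    report(42);   // instantiates report!(int)
}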
> Mastering D will take years, when it stabilizes enough to stop
> being a moving target.
D 1.0 is not a moving target. It's been feature frozen for over a year
now and has received only maintenance fixes. People using D for
production work use D 1.0. D 2.0 is a moving target, much like C++0x is,
and for the same reasons.
It's a little inconsistent to argue this point and then later argue as
if C++0x were a standard and as if non-standard C++ compiler extensions
were part of C++.
>>>> C++: int sizes variable, typedef sizes fixed
>>>> D: int sizes fixed, typedef sizes variable
>>> But in D, you have no built-in support for requesting "the fastest
>>> integral type please"
>> Yes:
>> std.stdint.int_fastNN_t, where NN is one of (8,16,32,64)
> So the fastest is std.stdint.int_fast8_t then? Good to know.
> Is that usually a 32-bit type on current hardware?
Yes.
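
A sketch using those names (the 32 bit claim is typical for x86, not
guaranteed):

import std.stdio;
import std.stdint;

static assert(int.sizeof == 4);   // in D, int is always exactly 32 bits

void main()
{
    int_fast8_t n = 0;   // fastest type holding at least 8 bits
    writefln("%s bits", int_fast8_t.sizeof * 8);   // typically "32 bits" on x86
}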
or "the smallest of at least 48 bits"
Yes:
std.stdint.int_leastNN_t, where NN is one of (8,16,32,64)
48 is not one of those. So programs have to ask for at least
64 bits, even if the hardware had an efficient 48 bit type but
no efficient 64-bit type.
There is no minimum 48 bit type and no minimum 64 bit type in C++. There
is no efficient 64 bit type in C++.
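
So in D, asking for "at least 48 bits" rounds up to the 64 bit
least-type; a sketch (names as above):

import std.stdint;

// Needs at least 48 bits; the smallest least-type that fits is 64.
int_least64_t offset;

static assert(int_least64_t.sizeof * 8 >= 48);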
>>> (except in that D doesn't support types between 32 and 64 bits,
>>> as far as I understand).
>> C++ has no standard way of saying "at least 48 bits", either.
> C++0x does, as does C99.
C++0x is a paper airplane (there are no C++0x compilers and won't be for
years), and there is no "at least 48 bits" proposal in the works that
I'm aware of. C99 has no required "at least 48 bits" type.
> (It wouldn't be fair to compare an
> unstandardized D to the last ISO standard for C++, which is
> essentially from 1998. More reasonable to compare D from the
> 21st century to 21st-century C++, no?)
D 1.0 is standardized and compilers exist for it. I agree it wouldn't be
reasonable to compare that with C++0x, which isn't even defined yet and
whose conforming compilers are years away; nor is it reasonable (for
this discussion) to bring up non-standard C++ compiler extensions.
>> D supports more integer types than C++ does. C++ has no requirement for
>> a 64 bit integral type, for example.
> C++0x does support an integral type of at least 64 bits,
C++0x is not a standard and no conforming implementations exist. Saying
what it supports when you're looking to write code today is premature.
> as does C99,
You're right, C99 does support 64 bit integers ("long long"); C++ does
not. C99 also supports other basic types that aren't even proposed for
C++0x.
> and implementations can and do support sizes which aren't in
> {8,16,32,64}.
Those are non-standard extensions. "long long" is not a reliable
extension; VC calls it __int64, for example. It really isn't consistent
to argue that C++ is a standard and D isn't, and then argue that C++ is
better because some compilers have non-standard extensions.
> D might mandate more than C++98, but not more than
> C++0x, and D disallows types which are supported by C++ compilers.
What type does D disallow that is supported by standard C++ compilers?
Or even non-standard ones? <g>
> No; my point was not that it doesn't come up often, but that
> the example you described NEVER occurs because it is mathematically
> impossible. An 8-bit signed char does not have values in the
> range you cite as problematic.
Sure it occurs; here's the classic example:

char c;                          /* bug: c should be declared int */
while ((c = getchar()) != EOF)   /* if char is unsigned, c can never equal EOF; */
    ...                          /* if char is signed, a 0xFF byte falsely matches EOF */
--------
Walter Bright
http://www.digitalmars.com
C, C++, D programming language compilers
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]