Re: Post Increment Operator ambiguity??
Seungbeom Kim wrote:
It seems so natural to me that I doubt whether you'll find a serious
study about it. Someone recently in this group also gave a very concrete
example of how the ordering of evaluation can affect the performance;
didn't you see it?
There are various examples in the thread. I will be most appreciative if
you could give me a specific post to look at, so we both know exactly
what you are referring to.
I think it's a very reasonable thing to question. The downside of an
undefined order of evaluation is that it is not reliably possible to tell
whether an expression has side effects or not.
Whether an expression has a side effect or not doesn't depend on the
evaluation order.
True. My point was that the compiler will not always be able to issue a
diagnostic when an expression has a dependence on the order of evaluation.
This makes for a potential bug that a) cannot be mechanically diagnosed and
b) cannot be discovered by testing. I view this as an impediment to
writing reliable, robust software.
Therefore, one can have
thoroughly tested and working code suddenly and mysteriously fail when
ported, when the compiler is updated, or even when different switches
to the compiler are used.
I don't even need testing to find and eliminate such fragile code;
I never write it in the first place. Even without actually running the
code, a code review can catch such errors.
I used to work on flight controls design for Boeing airliners. It turns
out that if the hydraulic lines to the elevator actuator are reversed,
the airplane will crash. So what does Boeing design do about this?
1) in and out fittings/lines are color coded
2) in and out fittings are different line sizes
3) one side has left hand threads, the other side has right hand threads
4) the lines physically cannot be moved to the wrong fitting
5) the lines are not long enough to be connected to the wrong fitting
6) everything gets inspected by different people
7) the pilot is required to check for correct movement before every flight
Nothing is left to chance. Everything is assumed to be wrong until
proven correct, over and over. There is no reliance upon a single
mechanic who must get it right every time every day and isn't allowed to
ever have a bad day.
Let me put it another way. Have you ever written code that got a syntax
error the first time it was run through a compiler? If not, my
congratulations. If so, then I submit that you are not above making
simple errors, and any human code review team is going to make them, too.
If I, as a manager, had a job where I absolutely had to deliver bug free
software, you bet I'd use as many mechanical bug finding tools as
possible. What I wouldn't do is bet my life on my programmers never
making a mistake. I don't care how good they are.
Java does have a defined order of evaluation. I've never heard anyone
comparing C++ efficiency with Java make any comment that Java was
weighed down by this constraint.
Because Java is already slower with the JVM interpreting the bytecode.
JVM's have been using JIT compilers to execute native code for 12 years now.
Java is based on a different philosophy, and it makes different
trade-offs. It does a lot run-time checking, and it even specifies the
size of int. No wonder it has taken a different approach.
Nevertheless, enormous attention has been paid to the issue of Java
runtime performance, yet I've never seen the order-of-evaluation issue
ever even mentioned. If it affected performance, I would expect people
to talk about it.
----
Walter Bright
Digital Mars - C, C++, D programming language compilers
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]