On Fri, 1 May 2009, Arne Vajhøj wrote:
Mark Space wrote:
Arne Vajhøj wrote:
It does not matter. If millisecond resolution is not fine enough, then
the results are worthless on OSes like Windows and *nix.
currentTimeMillis(), on Windows at least, can use a very coarse
timer that doesn't even report milliseconds correctly. It's always
off by something like 18 milliseconds one way or the other (I'd have
to look up the exact value). nanoTime() isn't accurate to nanoseconds,
but it's usually accurate to better than a millisecond.
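For what it's worth, you can see the difference yourself with a quick
probe that spins until each clock's reading changes and prints the step
size (a throwaway sketch; the class name is mine, and the numbers it
prints depend entirely on your OS and hardware):

public class TimerGranularity {
    public static void main(String[] args) {
        // Smallest observable step of System.currentTimeMillis()
        long t0 = System.currentTimeMillis();
        long t1;
        do {
            t1 = System.currentTimeMillis();
        } while (t1 == t0);
        System.out.println("currentTimeMillis step: " + (t1 - t0) + " ms");

        // Smallest observable step of System.nanoTime()
        long n0 = System.nanoTime();
        long n1;
        do {
            n1 = System.nanoTime();
        } while (n1 == n0);
        System.out.println("nanoTime step: " + (n1 - n0) + " ns");
    }
}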
Sure.
But given the unpredictable scheduling of other processes, you still
cannot meaningfully measure such small intervals.
Sure you can, you just have to do a lot of runs. This is why, when I do
benchmarks, I do 100 runs and ignore the five slowest, on the grounds
that they probably took so long because another process got scheduled
in the middle, or GC kicked in, or something. For this to be effective,
you actually want to keep the individual runs quite short, to minimise
the chance of them being disrupted, and to do a huge number of them;
for that, a high-resolution timer is essential.
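In code, that scheme looks roughly like this (a sketch rather than
anyone's actual harness; the task, the run count, and the discard
count are all illustrative):

import java.util.Arrays;

public class Benchmark {
    static final int RUNS = 100;
    static final int DISCARD = 5; // treat the slowest runs as outliers

    public static void main(String[] args) {
        long[] times = new long[RUNS];
        for (int i = 0; i < RUNS; i++) {
            long start = System.nanoTime();
            task(); // the code under test (a placeholder here)
            times[i] = System.nanoTime() - start;
        }
        Arrays.sort(times); // the slowest runs end up at the top
        long total = 0;
        for (int i = 0; i < RUNS - DISCARD; i++) {
            total += times[i];
        }
        System.out.println("mean of fastest " + (RUNS - DISCARD)
                + " runs: " + total / (RUNS - DISCARD) + " ns");
    }

    // Placeholder workload; substitute whatever is being measured.
    static void task() {
        double x = 0;
        for (int i = 0; i < 10000; i++) {
            x += Math.sqrt(i);
        }
        if (x < 0) System.out.println(x); // keeps the JIT from eliding the loop
    }
}

Note the use of nanoTime() for the per-run timing: with runs this
short, millisecond resolution would round most of them to zero.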
1) That is actually very close to what I suggest. You are just making
many short runs instead of a few long runs.
2) And you don't want to ignore GC time, because GC will also hit the
real app.
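For contrast, here is the shape of the "few long runs" alternative,
where GC pauses stay inside the measurement instead of being filtered
out (again only a sketch; task() and the iteration count are
placeholders):

public class LongRunBenchmark {
    static int sink; // side effect so the JIT can't drop the work

    public static void main(String[] args) {
        int iterations = 1000000; // illustrative; long enough to span GC cycles
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            task();
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("mean per iteration (GC included): "
                + elapsed / iterations + " ns");
    }

    // Placeholder that allocates a little, so GC actually participates.
    static void task() {
        StringBuilder sb = new StringBuilder("x");
        sb.append(System.identityHashCode(sb));
        sink += sb.length();
    }
}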