Re: Bulk Array Element Allocation, is it faster?
Lew wrote:
> You can't "optimize" allocation of n references
> to also create a block of n instances. The optimizer
> cannot know that that is the intended use of the array.
The following optimization works independently of
the use of the array. You can even nullify an
array element or set it to a newly created object.
So, starting from the unoptimized code:
    for (int i = 0; i < n; i++) {
        lock heap
        bla[i] = new Bla(i);
        unlock heap
    }
To the following:
    lock heap
    p = allocate size n*X    // X = size of one Bla instance
    unlock heap
    for (int i = 0; i < n; i++) {
        bla[i] = init p;
        p += X;
    }
shouldn't be a problem for a decent compiler.
Various compilers do much more with loops. Why
do you doubt that this optimization is possible?
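At the source level, the two strategies being compared can be sketched as plain Java. This is a minimal illustration, not the actual code from my application; "Bla" is just the stand-in class name used in the pseudocode above:

```java
// Sketch of the two allocation strategies under discussion.
// Bla is a hypothetical stand-in for the real value class.
class Bla {
    int value;
    Bla(int value) { this.value = value; }
}

public class BulkVsLazy {
    // Bulk: allocate all n instances up front in one tight loop.
    // This is the loop the compiler could, in principle, turn into
    // a single heap reservation as shown in the pseudocode above.
    static Bla[] bulk(int n) {
        Bla[] bla = new Bla[n];
        for (int i = 0; i < n; i++) {
            bla[i] = new Bla(i);
        }
        return bla;
    }

    // Lazy: leave slots null and allocate each element on first use,
    // paying one allocation per access instead of one bulk pass.
    static Bla get(Bla[] bla, int i) {
        if (bla[i] == null) {
            bla[i] = new Bla(i);
        }
        return bla[i];
    }

    public static void main(String[] args) {
        Bla[] a = bulk(4);
        System.out.println(a[3].value);      // prints 3
        Bla[] b = new Bla[4];
        System.out.println(get(b, 2).value); // prints 2
    }
}
```

Whether the JIT actually coalesces the bulk loop's allocations is exactly the open question; the sketch only fixes the two shapes of code being measured.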
> For the few cases where block allocation
> *might* provide negligible speedup.
Oh, so you don't really doubt that it is possible;
you rather doubt that it is worth it. Well, I
can tell you: in my application, the allocation
of small arrays, filled with objects initialized
by an empty constructor, is done over and over.
I have recently run VisualVM, and this allocation
really is most of what happens in the app while
it is solving a problem. And since it is so
dominant, any change in the way it is done shows
up directly in the timings. Here are the timings
again:
OS    JDK  Arch     Bulk    Lazy  Delta %
Win   1.7  64bit   8'159   8'975    10.0%
Win   1.6  64bit   8'771   9'805    11.8%
Win   1.6  32bit  14'587  14'744     1.1%
Win   1.5  32bit  17'139  17'405     1.6%
Mac   1.6  64bit  11'003  12'363    12.4%
Unix  1.6  32bit  26'517  26'858     1.3%
On 64-bit, bulk allocation is about 10% faster
for the present application, which makes heavy
use of allocating these objects and arrays.
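For reference, a minimal harness for producing timings like those in the table might look like the following. The class and method names are hypothetical, and a real measurement would also need JIT warmup and multiple runs, which this sketch omits:

```java
// Minimal timing sketch for the bulk allocation pattern.
// Names are hypothetical; no warmup or statistics are done here.
public class AllocTiming {
    static Object[] sink; // keep results reachable so the JIT
                          // cannot eliminate the allocations

    // Time reps passes of bulk-allocating an n-element array,
    // returning elapsed wall-clock time in milliseconds.
    static long timeBulk(int n, int reps) {
        long start = System.nanoTime();
        for (int r = 0; r < reps; r++) {
            Object[] bla = new Object[n];
            for (int i = 0; i < n; i++) {
                bla[i] = new Object();
            }
            sink = bla;
        }
        return (System.nanoTime() - start) / 1000000L;
    }

    public static void main(String[] args) {
        System.out.println("bulk: " + timeBulk(16, 100000) + " ms");
    }
}
```

Comparing the same harness on 32-bit and 64-bit JVMs is the kind of experiment that produced the deltas above.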
> As people keep pointing out in this conversation.
I know: it is logical that if the application
did not make heavy use of allocating these
objects and arrays, then nothing would be seen.
But if nothing were seen at all, I wouldn't be
posting here asking what is going on. Something
is seen: the 10%. It is not "nothing".
So what is going on on 64-bit?
Bye