Re: Pattern suggestion
On 15.04.2012 21:56, Martin Gregorie wrote:
On Sun, 15 Apr 2012 13:57:42 -0300, Arved Sandstrom wrote:
On 12-04-15 01:17 PM, Patricia Shanahan wrote:
On 4/15/2012 7:11 AM, FrenKy wrote:
Hi *,
I have a huge file (~10GB) which I'm reading line by line. Each line
has to be analyzed by a number of different analyzers. The problem is
that the processing is sometimes time consuming (usually because of
delays on external interfaces), so to get reasonable performance I
would need to make it heavily multithreaded. The file should be read
only once to reduce disk I/O.
So I need "1 driver to many workers" pattern where workers are
multithreaded.
I have a solution now based on Observable/Observer that I use (and it
works) but I'm not sure if it is the best way.
Observer does seem weird for this. Fork / join approaches seem much
more natural.
I suggest taking a look at java.util.concurrent.ThreadPoolExecutor and
related classes.
In the past we had a few issues with TPE because it seemed to try to be
too smart about resource usage (threads can die if min and max are set
differently). I don't remember the details but it might be worth
checking the official bug database.
Try to minimize ordering relationships between processing on the lines,
so that you can overlap work on multiple lines as much as possible.
+1
java.util.concurrent will definitely have something. It could well be
that the processing of each line is isolated, and I'd assuredly be
thinking of ThreadPoolExecutor or something similar for managing these.
It has a lot of tuning options including queues. If the analyzers for
each line have to coordinate (and maybe there's some final processing
after all complete) there are classes for that too, like CyclicBarrier.
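To illustrate the coordination case: a CyclicBarrier with a barrier action can run a final per-line step once every analyser thread has arrived. A minimal sketch, where the analyser threads and the printed message are made up for illustration:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierPerLine {
    public static void main(String[] args) {
        int analysers = 3;
        // The barrier action runs once, after all analyser threads call await().
        CyclicBarrier barrier = new CyclicBarrier(analysers,
                () -> System.out.println("all analysers done for this line"));
        for (int i = 0; i < analysers; i++) {
            new Thread(() -> {
                try {
                    // ... per-analyser work on the current line would go here ...
                    barrier.await();   // block until the other analysers finish
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```

The barrier is reusable, so the same set of threads could loop over lines and hit it once per line.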
Yes. Since the OP gives no indication that the analysers needed for a
line can be selected by some sort of fast, simple inspection, about all
you can do is:
foreach line l
foreach analyser a
start a thread for a(l)
wait for all threads to finish
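The loop above maps fairly directly onto ExecutorService.invokeAll, which submits all analyser tasks for one line and blocks until every one has finished. A sketch, assuming a made-up Analyser interface and two toy analysers standing in for the OP's real ones:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerLineFanOut {
    // Hypothetical analyser interface; the OP's real analysers are unknown.
    interface Analyser {
        String analyse(String line);
    }

    public static void main(String[] args) throws Exception {
        List<Analyser> analysers = List.of(
                line -> "len=" + line.length(),
                line -> "upper=" + line.toUpperCase());
        ExecutorService pool = Executors.newFixedThreadPool(analysers.size());

        String line = "example input line";   // one line of the big file
        List<Callable<String>> tasks = new ArrayList<>();
        for (Analyser a : analysers) {
            tasks.add(() -> a.analyse(line));
        }
        // invokeAll blocks until every analyser has finished this line;
        // this is the "wait for all threads to finish" step of the sketch.
        List<Future<String>> results = pool.invokeAll(tasks);
        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

Reusing one pool across lines avoids the cost of starting fresh threads per line.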
Starting threads and waiting for their termination only makes sense if
all the results need to be processed together. If that's not the case
I'd rather use ThreadPoolExecutor (or something similar which is pretty
easily coded) and just throw tasks into the queue.
At first glance you might think using a queue per analyser would help,
but with the data volumes quoted that will soon fall apart if any
analyser is more than trivially slower than the rest. As the OP has
already said that
some analysers can be much slower due to external interface delays (I
presume that means waiting for DNS queries, etc.), I think he's stuck
with the sort of logic I sketched out. After processing has gotten under
way and any analyser-specific queues have filled up, the performance of
any more complex logic will degrade to the above long before the input
has been completely read and processed.
In summary, don't try to do anything more sophisticated than the above.
I would suggest a change depending on the answer to this question: is
there a fixed line format which needs to be parsed identically for
each analysis? If so, I'd avoid multiple identical parse steps and
write a variant like this:
foreach line l
dat = parse(l)
foreach analyser a
start a thread for a(dat)
wait for all threads to finish
If the analysis results do not have to be aligned, I'd simply do
queue = new TPE
foreach line l
dat = parse(l)
foreach analyzer a
queue.execute(a.task(dat))
queue.shutdown()
while (!queue.awaitTermination(...)) {/* nop */}
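Fleshed out into a compilable example, with a trivial trim() standing in for the shared parse step and a length-summing task standing in for a.task(dat):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueAllTasks {
    public static void main(String[] args) throws InterruptedException {
        List<String> lines = List.of("one", "two", "three"); // stands in for the 10GB file
        int nAnalysers = 2;
        AtomicInteger work = new AtomicInteger();
        ExecutorService queue = Executors.newFixedThreadPool(4);

        for (String line : lines) {
            String dat = line.trim();                 // the shared parse step (placeholder)
            for (int a = 0; a < nAnalysers; a++) {
                // stands in for queue.execute(a.task(dat))
                queue.execute(() -> work.addAndGet(dat.length()));
            }
        }
        queue.shutdown();
        while (!queue.awaitTermination(1, TimeUnit.SECONDS)) { /* nop */ }
        System.out.println("work units: " + work.get());
    }
}
```

Note that Executors.newFixedThreadPool uses an unbounded queue; with a 10GB input you would likely want a ThreadPoolExecutor with a bounded queue and CallerRunsPolicy so the reader thread is throttled instead of queueing the whole file in memory.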
Kind regards
robert
--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/