Re: speed performances / hardware / cpu
antoine wrote:
- I've profiled, and the methods seem to be running efficiently; there
isn't one in particular taking up most of the processing (well, there
is, but that's expected behavior, and it's not 99.999% against 0.001%
for the other processes), and there's no huge memory waste anywhere.
The goal here would be to improve core speed.
Something still puzzles me: If the CPU is not 100% utilized,
or very nearly so, it follows that something else is retarding
the progress of your code. I/O, memory, internecine contention
for locks, you name it: But if there's spare CPU capacity lying
about unused, speeding up the CPU just leads to spending even
more time in the idle loop.
But perhaps I've misunderstood your description of what's
been measured. (You're clearly measuring things, and that's good,
but I'm not confident that I've grasped the measurements.)
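One way to answer that is to compare the wall-clock time of the
update-to-order interval with the CPU time the thread actually consumed
during it. Here's a rough sketch of the idea in Java (an assumption on my
part, since the thread shows no code; doRecomputation() is just a made-up
stand-in for the real callback work):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class CpuVsWallClock {
        public static void main(String[] args) {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            if (!bean.isCurrentThreadCpuTimeSupported()) {
                System.out.println("per-thread CPU time not supported here");
                return;
            }

            long wallStart = System.nanoTime();
            long cpuStart  = bean.getCurrentThreadCpuTime();

            doRecomputation();   // made-up stand-in for the callback work

            long cpuUsed  = bean.getCurrentThreadCpuTime() - cpuStart;
            long wallUsed = System.nanoTime() - wallStart;

            // If cpuUsed is much smaller than wallUsed, the thread spent
            // most of the interval blocked (I/O, locks, GC pauses) rather
            // than computing, and a faster CPU alone won't close the gap.
            System.out.printf("wall = %.3f ms, cpu = %.3f ms (%.0f%% on-CPU)%n",
                    wallUsed / 1e6, cpuUsed / 1e6, 100.0 * cpuUsed / wallUsed);
        }

        private static void doRecomputation() {
            double x = 0;
            for (int i = 0; i < 1_000_000; i++) x += Math.sqrt(i);
            if (x < 0) System.out.println(x);  // keep the result "live"
        }
    }

If the CPU time turns out to be only a small slice of the wall-clock
time, you're back to the waiting problem above, not a raw-speed problem.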
- There are around 100 financial instruments monitored concurrently.
That means market prices (not all updated at the same time, since those
arrive as callbacks from the "market"), but also theoretical variables
computed locally from other market values (themselves updated
through callbacks).
- Basically it works like this: one specific market update triggers a
callback, which triggers a global recomputation; variables are then
tested and may or may not trigger the sending of an order. I'm trying
to reduce the time between the reception of the update and the release
of the new order (it's currently between 2 and 8 ms).
Knowing nothing about how your application is organized, and
at the risk of making an even greater fool of myself: Is a "global
recomputation" necessary? Consider the lowly spreadsheet: it toils
not, neither doth it spin, yet it's smart enough to react to a change
in cell C6 by recomputing only those other cells for which C6 is an
input, and those further cells that depend on the recomputed cells,
and so on. Perhaps you could keep track of these dependencies and
replace a "global recomputation" with a "local recomputation." Might
a tree of dependencies reduce the workload?
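If it helps to make that concrete, here's a very small sketch of such
dependency tracking in Java (again an assumption about the language; the
class and field names are invented for illustration):

    import java.util.*;
    import java.util.function.Supplier;

    /** Sketch of spreadsheet-style dependency tracking (invented names). */
    class DepGraph {
        static class Cell {
            final String name;
            final Supplier<Double> formula;   // how to recompute this cell
            final List<Cell> dependents = new ArrayList<>();
            double value;
            Cell(String name, Supplier<Double> formula) {
                this.name = name;
                this.formula = formula;
            }
        }

        /** Recompute only the cells downstream of the changed input. */
        static void propagate(Cell changed) {
            Deque<Cell> queue = new ArrayDeque<>(changed.dependents);
            Set<Cell> done = new HashSet<>();
            while (!queue.isEmpty()) {
                Cell c = queue.poll();
                if (!done.add(c)) continue;   // each cell at most once
                c.value = c.formula.get();
                queue.addAll(c.dependents);
            }
        }

        public static void main(String[] args) {
            Cell price = new Cell("price", () -> 0.0);   // set by the callback
            Cell theo  = new Cell("theo",  () -> price.value * 1.02);
            Cell edge  = new Cell("edge",  () -> theo.value - 100.0);
            price.dependents.add(theo);
            theo.dependents.add(edge);

            price.value = 101.5;   // a market update arrives...
            propagate(price);      // ...and only theo and edge are recomputed
            System.out.println(edge.name + " = " + edge.value);
        }
    }

A production version would recompute in topological order (so a cell
with several changed inputs is evaluated once, after all of them), but
even this crude chain shows the point: one market tick touches a handful
of cells, not the whole set of 100 instruments.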
- I'm working on improvements on the code side, but I feel I've reached
a limit. The last time I changed my hardware was 2.5 years ago, and even
though Moore's law hasn't been feeling too well recently, I still believe
there's some good to be gained by upgrading hardware. I'm just trying to
figure out which element I should focus on most. For the moment it
appears that CPU & GPU count and clock speed could help, but hey, maybe
simply "getting a regular, faster machine" will do the trick :-)
It might. In fact, it probably will improve matters to some
extent. But to what extent? 500% better, or just 50%, or merely 5%?
An informed economic decision requires that you have some idea of how
much bang your buck will buy.
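One cheap way to get that idea before spending anything is an
Amdahl-style estimate: only the CPU-bound fraction of the 2-8 ms shrinks
when the CPU gets faster. The figures below are purely illustrative;
plug in whatever your profiler actually reports:

    /** Amdahl-style estimate: only the CPU-bound part of the latency shrinks. */
    public class UpgradeEstimate {
        public static void main(String[] args) {
            double latencyMs  = 5.0;  // illustrative: middle of the 2-8 ms range
            double cpuBound   = 0.6;  // illustrative: fraction of time on the CPU
            double cpuSpeedup = 1.3;  // illustrative: a core 30% faster

            double after = latencyMs * ((1 - cpuBound) + cpuBound / cpuSpeedup);
            System.out.printf("estimated latency after upgrade: %.2f ms "
                    + "(was %.2f ms)%n", after, latencyMs);
            // With these numbers: 5 * (0.4 + 0.6/1.3) = about 4.31 ms --
            // roughly 14% better, not 30%, because the non-CPU part of the
            // latency doesn't get any faster.
        }
    }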
--
Eric Sosman
esosman@acm-dot-org.invalid