Re: Question about reading from stream.
On Mar 27, 8:31 pm, c...@mailvault.com wrote:
On Mar 27, 4:03 am, James Kanze <james.ka...@gmail.com> wrote:
On Mar 26, 8:44 pm, Victor Bazarov <v.Abaza...@comAcast.net> wrote:
[...]
It depends on the application. What if it's a compiler? Or a
Unix filter like grep or sed? For that matter, if clients share
no data directly, there are strong arguments for starting up a
new instance of a server for each connection;
I think the arguments in favor of a long running server are
stronger than those against it in the case of compilers.
The design and implementation have to be such that separate
requests do not interfere with each other. There are some
steps that you can take that help in that area, but don't
require anything like a new process and a completely fresh
start. Besides the basic efficiencies afforded, there's
a lot of basic information that doesn't change between
requests. Why rescan/prepare for <vector> billions of
times when it doesn't change one little bit? It surprises
me that you question this given what I know of your
background.
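A minimal sketch of the kind of resident compile server being argued for here; every name in it (HeaderCache, loadStandardHeaders, compileOne, the one-request-per-line protocol) is a hypothetical placeholder, not anything gcc or any other compiler actually provides:

    #include <iostream>
    #include <string>

    struct HeaderCache { int dummy; };                // stand-in for preparsed headers

    HeaderCache loadStandardHeaders() { return HeaderCache{0}; }          // stub
    int compileOne(const std::string&, const HeaderCache&) { return 0; }  // stub

    int main()
    {
        // Built once, when the server starts -- not once per compilation.
        HeaderCache cache = loadStandardHeaders();

        std::string request;
        while (std::getline(std::cin, request)) {     // one compile request per line
            int status = compileOne(request, cache);  // reuses the shared cache
            std::cout << status << '\n';
        }
        return 0;
    }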
The contents of std::vector are data, not code, and don't
evolve. There's nothing wrong with having it cached somewhere,
maybe loaded by mmap, but just keeping a server up so that
compilations won't have to reread it seems a bit overkill.
(There's also the fact that formally, the effects of compiling
an include file depend on what macros are defined beforehand.
Even something like std::vector: the user doesn't have the right
to define any macros which might conflict, but most
implementations have two or more different versions, depending
on the settings of various macros.)
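To make that concrete, here is a small illustration using libstdc++'s _GLIBCXX_DEBUG as one such configuration macro (other implementations have their own equivalents): defined before <vector> is included, it selects a different, checked implementation, so anything cached for <vector> without that macro would not be valid for this translation unit.

    #define _GLIBCXX_DEBUG      // must precede the include to take effect
    #include <vector>
    #include <iostream>

    int main()
    {
        std::vector<int> v(3, 42);
        std::cout << v[1] << '\n';   // operator[] is range-checked in this mode
        return 0;
    }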
Anyway, my comments were, largely, based on the way things are,
rather than how they could be. I've not actually given the idea
of implementing a compiler as a server much thought, but today's
compilers are not implemented that way. I certainly don't want
to imply that things have to be like they are.
you don't want
start up time to be too long there, either.
It has to be done efficiently. Single-run compilers are a
luxury that is fading. I harp on this, but gcc needs to
be overhauled twice. First a rewrite in C++ and then a
rewrite to be an on line compiler. The first phase of
the on line part could be to simply run once and exit
after each request. That though would have to be
replaced by a real server approach that runs like a
champ. They are so far away from this it isn't even
funny.
So are most of the other compiler implementers, as far as I
know.
I'm not too sure what the server approach would buy us in most
cases, as opposed, say, to decent pre-compiled headers and
caching. (If I were implementing a compiler today, it would
definitely make intensive use of caching; as you say, there's no
point in reparsing std::vector every time you compile.)
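One way such caching might be keyed, as a rough sketch and not any real compiler's design (ParsedHeader and the string encoding of the macro state are hypothetical simplifications), is on everything that can legitimately change a header's meaning:

    #include <map>
    #include <string>
    #include <utility>

    struct ParsedHeader { std::string blob; };             // stand-in for a real AST

    // Key on the header name plus the macro/flag state in effect at inclusion.
    using CacheKey = std::pair<std::string, std::string>;

    class HeaderCache {
        std::map<CacheKey, ParsedHeader> cache_;
    public:
        const ParsedHeader& get(const std::string& header,
                                const std::string& macroState)
        {
            CacheKey key(header, macroState);
            auto it = cache_.find(key);
            if (it == cache_.end()) {
                // Parse only on a miss; identical (header, macro state)
                // pairs reuse the earlier result.
                it = cache_.emplace(key, parse(header, macroState)).first;
            }
            return it->second;
        }
    private:
        static ParsedHeader parse(const std::string& header,
                                  const std::string& macroState)
        {
            return ParsedHeader{header + "|" + macroState}; // stub parse
        }
    };

With a key like that, compiling with -D_GLIBCXX_DEBUG and without it simply produces two distinct cache entries, which addresses the macro objection above.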
As far as I know all they are working on is
C++0X support. Some of that is important, too, but
they shouldn't keep ignoring these other matters.
It may be that gcc has just become a dinosaur that
can't adapt to the times. They certainly haven't
done a good job keeping up in some respects. Where's
the "gcc on line" page like some compilers have?
The "xxx on line" pages I know of for other compilers are just
front ends, which run the usual, batch mode compiler. It would
be trivial for someone to do this with g++---if Comeau wanted,
for example, I doubt that it would take Greg more than half a
day to modify his page so you could use either his compiler or
g++ (for comparison purposes?).
I'm certainly in favor of making compilers accessible on-line,
but that's a different issue. No professional organization
would use such a compiler, except for test or comparison
purposes.
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34