Re: Question about reading from stream.
Vaclav Haisman wrote:
Carfield Yim wrote, On 25.3.2009 16:32:
Hi all, we are currently using the following code to read a file that another
process is continuously appending to (similar to running tail -f on the file
being processed):
infile.seekg(_currentFilePointer);
infile.read(_buffer, _buffer_size);
_bytesLeftInBuffer = infile.gcount();
I suspect this is in fact very inefficient. What is the preferred way to
read from a growing file? Use getc()? But wouldn't that require looping
many times?
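For reference, a minimal sketch of the kind of polling loop described above,
assuming another process keeps appending to the file; the file name, buffer
size, and poll interval are placeholders, and the sleep uses C++11
std::this_thread (any platform sleep call would do):

#include <chrono>
#include <fstream>
#include <iostream>
#include <thread>

int main()
{
    std::ifstream infile("growing.log", std::ios::binary); // placeholder name
    char buffer[4096];

    for (;;)
    {
        infile.read(buffer, sizeof buffer);
        std::streamsize n = infile.gcount();
        if (n > 0)
            std::cout.write(buffer, n);   // process the chunk read so far
        if (infile.eof())
        {
            infile.clear();               // reset eofbit so later reads succeed
            std::this_thread::sleep_for(
                std::chrono::milliseconds(200));  // wait for appended data
        }
        else if (!infile)
            break;                        // a real error, not just end of data
    }
}

Note that the explicit seekg in the original is not needed in this form:
after clear(), reading simply resumes from where the last read stopped.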
If you are concerned about raw speed then do not use streams for IO; there
are many layers of abstraction that make things less than spectacular. That
said, do you know that IO is the bottleneck of your application? Avoid
premature optimization.
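To illustrate the lower-level route being alluded to, here is a sketch of
the same chunked read done with C stdio instead of iostreams; the function
name, the offset handling, and the buffer size are invented for
illustration:

#include <cstddef>
#include <cstdio>

// Read one chunk starting at 'offset'; returns the number of bytes read,
// 0 when there is no new data yet (or on a seek failure).
std::size_t read_chunk(std::FILE* fp, long offset, char* buf,
                       std::size_t bufsize)
{
    if (std::fseek(fp, offset, SEEK_SET) != 0)
        return 0;
    std::size_t n = std::fread(buf, 1, bufsize, fp);
    std::clearerr(fp);  // drop the EOF flag so a later call can pick up
                        // data appended after this read
    return n;
}

The caller would keep its own running offset (offset plus the bytes read so
far) and call read_chunk again after a short wait, much like the original
loop.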
Those are good suggestions, and we can all agree that to optimize one
most often needs to measure first. But it does not take a measurement
to know that IO is a bottleneck. In every application. Hardware is
slow. And one needs to keep things like IO in mind when devising the
approach to serialization. Some optimization is not premature, like
picking quick sort over bubble sort: you don't need measurements for
that; you can use the measurements people have collected over the years.
On the flip side, once the sort is abstracted, one algorithm can
probably be replaced with another easily. So, to the OP: don't integrate
reading/writing into your code too tightly. Create an abstraction layer
so you can switch to a different method of serialization once you find
that you need one.
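To make that concrete, here is a minimal sketch of such an abstraction
layer; the interface name and its single member are invented for
illustration, and the details would of course depend on your application:

#include <cstddef>
#include <fstream>

// The rest of the code depends on this interface, not on iostreams.
class LogSource
{
public:
    virtual ~LogSource() {}
    // Fills 'buf' with up to 'bufsize' bytes of newly appended data;
    // returns the byte count, 0 if nothing new has arrived yet.
    virtual std::size_t readSome(char* buf, std::size_t bufsize) = 0;
};

// One implementation backed by std::ifstream.  It can later be swapped
// for one built on C stdio, mmap, or platform-specific calls without
// touching the calling code.
class IfstreamSource : public LogSource
{
    std::ifstream in_;
public:
    explicit IfstreamSource(const char* name)
        : in_(name, std::ios::binary) {}

    virtual std::size_t readSome(char* buf, std::size_t bufsize)
    {
        in_.read(buf, bufsize);
        std::size_t n = static_cast<std::size_t>(in_.gcount());
        if (in_.eof())
            in_.clear();  // keep the stream usable as the file grows
        return n;
    }
};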
V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask