Re: Java Serializing in C/C++?
On Mar 1, 03:43, gpderetta <gpdere...@gmail.com> wrote:
On Mar 1, 1:15 am, James Kanze <james.ka...@gmail.com> wrote:
[...]
If you're reading from a Unix pipe, you can't attempt a read
unless you know that all of the bytes you attempt to read will
be part of the message. If you try to read 100 bytes from a
Unix pipe, your process will block until it has actually read
100 bytes (or the pipe was closed on the write side, or you get
a signal, or a couple of other things which don't concern us
here). If the message only contains 80 more bytes, then you
will block until another message has been sent.
Hum, the POSIX documentation about read says:
"[...]
The value returned may be less than nbyte if the number of bytes left
in the file is less than nbyte, if the read() request was interrupted
by a signal, or if the file is a pipe or FIFO or special file and has
fewer than nbyte bytes immediately available for reading. For example,
a read() from a file associated with a terminal may return one typed
line of data."
So it should block only if there was *no* data to read in the
first place.
It didn't work in classical Unix:-).
Pipes are fairly complicated under modern Unix, for historical
reasons; most modern Unixes implement them using streams (that's
Unix STREAMS, not C++ iostreams, of course), which means that an
application *can* write to them using send, and *can* block the
stream into messages, in which case reads may (and in fact
will) always return at the end of a message. In practice, that
doesn't correspond to the usual use, however. A system might
also treat each individual "write" as a message (Linux seems
to), although this isn't required, and doesn't correspond to the
historical behavior.
Asking for more bytes than are actually available won't block;
it will simply return what is available.
Provided at least one byte is available, of course.
That's still not really ideal, since it may return more than one
message.
The spec says 'may' and not 'must', so a POSIX-conformant
system could presumably legally block even if some bytes
are available; but do real systems do that? I guess many
applications would break...
Such as... In classical Unix, a pipe would block, and I imagine
most real systems that use pipes use them in the "classical"
way; otherwise, they'd probably use sockets.
I think that the trick described by Brian Wood, that is,
keeping unconsumed read data in a buffer just in case you
over-read, should actually work in practice.
I don't know. I'll admit that I'm basing my statements on
somewhat dated experience -- in the 1980's and early 1990's,
pipes would block in such cases. Since then, the only times
I've used pipes is when one process is streaming data (no real
records) to another. Any time I've needed records, I've used
sockets or named pipes (which are a form of sockets). So maybe
I'm taking avoidance procedures for something that isn't a
problem any more.
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34