Re: beginner question about storing binary data in buffers, seeing binary data in a variable, etc
* James Kanze:
> On Jul 4, 11:35 pm, "Alf P. Steinbach" <al...@start.no> wrote:
>> * darren:
>>> I'm working on an assignment that has me store data in a
>>> buffer to be sent over the network. I'm ignorant about how
>>> C++ stores data in an array, and types in general.
>> A textbook would be a good resource.
> Most probably don't say anything about it, because it's
> unspecified. Except for the specifications of how pointer
> arithmetic works within arrays.
I'm sorry, but while data storage is not completely specified, it's not
unspecified either. The rules for pointer arithmetic are part of that
specification, and the standard also imposes requirements on the order
in which data are stored in arrays and structures.
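For example, a minimal sketch of what those guarantees buy you (nothing
here depends on a particular implementation):

  #include <cassert>
  #include <cstddef>

  int main()
  {
      int a[4] = { 10, 20, 30, 40 };

      // Array elements are contiguous and in declaration order,
      // so &a[i] + 1 is guaranteed to be &a[i + 1].
      assert( &a[0] + 1 == &a[1] );

      // The byte distance between consecutive elements is exactly
      // sizeof(int); arrays have no padding between elements.
      unsigned char const* p0 =
          reinterpret_cast<unsigned char const*>( &a[0] );
      unsigned char const* p1 =
          reinterpret_cast<unsigned char const*>( &a[1] );
      assert( p1 - p0 == static_cast<std::ptrdiff_t>( sizeof( int ) ) );
  }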
[...]
>>> I guess my question here is why do most buffers seem to be
>>> implemented as char arrays?
>> Are they?
> Transmission buffers, yes. char[] or unsigned char[] are really
> your only two choices. (I generally use unsigned char, but the
> C++ standard does try to make char viable as well. And on most
> typical architectures, where converting between char and
> unsigned char doesn't change the bit pattern, both work equally
> well in practice.)
Again, sorry, but it's not always necessary to use char buffers. At some
level the data will be treated as just bytes, but that level need not be
your C++ code (e.g. MFC serialization is a counter-example). I think
perhaps you had three unmentioned constraints in mind, namely (1) that
this is the lowest level and it's implemented in C++, (2) portable code,
and (3) a heterogeneous network with an arbitrary client on the other end.
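Under those constraints, the usual shape of the code is field-by-field
serialization into an unsigned char buffer. A minimal sketch, where
my_send is a hypothetical stand-in for whatever send function the actual
API provides:

  #include <cstddef>

  // Hypothetical stand-in for e.g. BSD send().
  extern void my_send( unsigned char const* data, std::size_t n );

  // Store a 32-bit value most significant byte first (network
  // order), using values only, never the host's representation.
  void put_u32( unsigned char* buf, unsigned long v )
  {
      buf[0] = static_cast<unsigned char>( (v >> 24) & 0xFF );
      buf[1] = static_cast<unsigned char>( (v >> 16) & 0xFF );
      buf[2] = static_cast<unsigned char>( (v >>  8) & 0xFF );
      buf[3] = static_cast<unsigned char>(  v        & 0xFF );
  }

  void send_record( unsigned long id, unsigned long value )
  {
      unsigned char buf[8];
      put_u32( buf,     id );
      put_u32( buf + 4, value );
      my_send( buf, sizeof buf );
  }

The point of shifting and masking instead of memcpy'ing a struct is that
padding and host byte order then never enter the picture.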
>>> Can any binary value between 0 and 255 be safely put into a
>>> char array slot (00000000 to 11111111)?
>> It depends what you're really asking.
>> If you provided a concrete example and what you expected as a
>> result, one could say whether that was correct or not.
>> But a char is guaranteed to have at least 256 possible bit
>> patterns (minimum 8 bits), yes.
> On the other hand, I think that in theory char could be signed 1's
> complement, and assigning a negative 0 (0xFF) could force it to
> positive (which would mean that you could never get 0xFF by
> assignment, but you could memcpy it in). I think: I'm too lazy
> to verify in the standard, and of course, any implementation
> that actually did this would break so much code as to be
> unviable.
In that case any 1's complement implementation that also used 1's
complement for signed 'char' would be unviable... :-)
It makes an interesting case for dropping that support in the standard,
and going for a requirement of two's complement for all signed integral
types.
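For concreteness, the byte round trip in question, as a minimal sketch
(the final assert is what relies on the usual 2's complement char):

  #include <cassert>
  #include <cstring>

  int main()
  {
      // unsigned char is guaranteed to hold every value 0..255.
      unsigned char u = 0xFF;

      // Copying the raw byte into a plain char via memcpy is always
      // OK; direct assignment of 255 to a signed char is where the
      // exotic representations discussed above could bite.
      char c;
      std::memcpy( &c, &u, 1 );

      // On the common two's complement machines the bit pattern
      // survives the round trip:
      assert( static_cast<unsigned char>( c ) == 0xFF );
  }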
>>> Why not implement a buffer using uint8_t?
>> That's not presently a standard C++ type.
> It's still a viable alternative.
Yes, and the main reason for "why not" is that it's not a standard C++ type.
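That said, if the compiler at hand provides C99's <stdint.h>, or you
have Boost available as a fallback, nothing stops you from using it in
practice. A sketch:

  // uint8_t is not in standard C++ (yet), but most current
  // compilers ship the C99 header, and Boost wraps it portably.
  #include <boost/cstdint.hpp>   // boost::uint8_t
  // or: #include <stdint.h>     // uint8_t, where available

  typedef boost::uint8_t byte;

  byte buffer[512];   // exactly 8 bits, where uint8_t exists at all

Note that uint8_t is only defined at all on platforms with 8-bit bytes,
which is part of why it isn't (yet) required by the C++ standard.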
> It's possible to write code for binary network protocols in a
> perfectly portable manner. It's rarely worth it, since it
> entails some very complex workarounds for what are, in the end,
> very rare and exotic machines that most of us don't have to deal
> with. Thus, I know that much of the networking software I write
> professionally will fail on a machine with an unusual conversion
> of unsigned to signed (i.e. one which isn't 2's complement, and
> doesn't just use the underlying bit pattern).
Tune up the warning level, perhaps? <g>
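For the unsigned case, though, the portable, value-based decoding isn't
even complicated; a minimal sketch, mirroring the put_u32 encoding
sketched earlier (it's the signed fields that need the contortions):

  // Reassemble a 32-bit field from four bytes, most significant
  // byte first (network order), using values only, never the
  // host's integer representation.
  unsigned long get_u32( unsigned char const* buf )
  {
      return (static_cast<unsigned long>( buf[0] ) << 24)
           | (static_cast<unsigned long>( buf[1] ) << 16)
           | (static_cast<unsigned long>( buf[2] ) <<  8)
           |  static_cast<unsigned long>( buf[3] );
  }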
Cheers,
- Alf
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?