Re: Signed/Unsigned Conversion
* Andrew Koenig:
<andrew.bell.ia@gmail.com> wrote in message
news:1177441809.619435.160970@c18g2000prb.googlegroups.com...
If the above is true, is there any clean way of portably converting an
unsigned int to a signed int, preserving all bits from the source in
the destination? I understand that the usual implementation of a
static_cast from unsigned int to int would behave as I have described,
but I'd like to be sure.
If your machine uses 2's complement notation, there are two cases:

1) The high-order bit of the number is off. In that case, you can just
   convert it.

2) The high-order bit of the number is on. In that case, turn off the
   bit, convert it, and subtract the bit you turned off.
In other words: suppose maxneg holds the most negative integer of your
desired type (e.g. -2147483648 for a 32-bit int). Suppose further that
s is a variable of your given signed type and u is a variable of the
corresponding unsigned type. Then the following (untested) code should
do it:

    s = u & ~maxneg;    // Turn off the high-order bit
    if (u & maxneg)
        s += maxneg;    // We add here because maxneg is negative.
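
For anyone who wants to try it, here is a compilable rendering of that
sketch. The function name to_signed is my own, and it assumes int and
unsigned int as the two types on a 32-bit two's complement machine:

    #include <climits>
    #include <iostream>

    // Convert an unsigned int to int, preserving the bit pattern, using
    // the turn-off-the-high-bit approach described above.
    int to_signed(unsigned int u)
    {
        const int maxneg = INT_MIN;    // most negative int, -2147483648 here
        const unsigned int highbit = static_cast<unsigned int>(maxneg); // 0x80000000

        int s = static_cast<int>(u & ~highbit);   // turn off the high-order bit
        if (u & highbit)
            s += maxneg;                          // add because maxneg is negative
        return s;
    }

    int main()
    {
        std::cout << to_signed(0xFFFFFFFFu) << '\n';   // prints -1
        std::cout << to_signed(42u) << '\n';           // prints 42
    }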
For two's complement form, a plain

    s = u;

will in practice preserve the bits: the conversion is implementation-
defined when the value does not fit in the signed type, but two's
complement implementations simply reuse the bit pattern. Simple, isn't
it?
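
A minimal sketch of that plain-assignment approach (variable names are
mine); since the out-of-range conversion is only implementation-defined
by the standard, this relies on what two's complement implementations
do in practice:

    #include <iostream>

    int main()
    {
        unsigned int u = 0xFFFFFFFFu;    // all bits set (32-bit unsigned int)
        int s = u;                       // or static_cast<int>(u)
        std::cout << s << '\n';          // prints -1 on a two's complement machine
    }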
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?