Re: Preventing Denial of Service Attack In IPC Serialization
On Jul 5, 11:04 pm, c...@mailvault.com wrote:
On Jul 5, 8:11 am, Le Chaud Lapin <jaibudu...@gmail.com> wrote:
On Jul 4, 3:55 am, c...@mailvault.com wrote:
Jarl has answered what you are saying numerous times. He includes
a message size/header that applications can check to limit their
vulnerability to such an attack. It isn't difficult to keep track
of how many bytes remain in a message and pass that to a
load_collection/Receive function. The function uses that to
check the sanity of the count value that it gets.
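The check described above can be sketched roughly as follows. This is a minimal illustration, not Jarl's actual code; the names (MessageReader, load_collection) and the wire format (big-endian 32-bit fields) are assumptions for the example:

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical reader that tracks how many bytes remain in the current
// message, per its length header.
struct MessageReader {
    const std::uint8_t* pos;
    std::size_t         remaining;

    std::uint32_t read_u32() {
        if (remaining < 4) throw std::runtime_error("truncated message");
        std::uint32_t v = 0;
        for (int i = 0; i < 4; ++i) v = (v << 8) | pos[i];
        pos += 4;
        remaining -= 4;
        return v;
    }
};

// Reject any element count that could not possibly fit in the bytes that
// actually remain, instead of trusting the count and allocating blindly.
std::vector<std::uint32_t> load_collection(MessageReader& in) {
    const std::uint32_t count = in.read_u32();
    const std::size_t elem_size = sizeof(std::uint32_t);
    if (count > in.remaining / elem_size)
        throw std::runtime_error("count exceeds bytes remaining in message");
    std::vector<std::uint32_t> result;
    result.reserve(count);   // safe: bounded by the real message size
    for (std::uint32_t i = 0; i < count; ++i)
        result.push_back(in.read_u32());
    return result;
}
```

With this check, a hostile count of 0xFFFFFFFF in a 20-byte message is rejected before any allocation happens, because it cannot be reconciled with the bytes remaining.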
Jarl also claimed that Jeff was "beating a dead horse", implying that
there is no problem.
Solutions aside, do you or do you not agree that, today, on the 4th of
July, 2007, Boost Serialization is doing what I said we should avoid in
my OP?
Do you agree that Boost Serialization is using an implementation that
is subject to DoS as I wrote in my original post?
Yes, and it should be changed to better defend against
the possibility. Jarl is the only one I'm aware of
who has a solution in place already and he deserves
credit for that.
I am glad that we can now all agree that Boost Serialization is
vulnerable to a readily exploitable DoS attack, along with any other
serialization framework that uses the same model outlined in this
thread.
As far as giving credit to Jarl...I am the last person to want to deny
credit to someone for having an idea, even one that (IMO) is only
good, but not great. I was inclined last night to give Jarl credit
for his buffer technique...but after thinking more, I do not regard
this as a solution. The truth is that I feel that the minimum
standard that I would hold for a serialization framework is violated
by Jarl's method. In particular, the pre-allocation buffer is, IMO,
intolerable, not to mention the multiple-allocation methods that he
and others described. His method would fail on a PDA with only 64MB
of RAM, and I do run my serialization code on PDA's, and so would a
few others.
I would also like to assert for the record that I am all but
convinced that the only survivable (not perfect, but workable) approach
_requires_ participation by the objects themselves to limit
resources. Before, I was only vaguely suspicious. Now I am certain.
Jarl's method ignores participation by the objects.
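To make the idea of object participation concrete, here is a minimal sketch under my own assumptions; it is not any method actually proposed in this thread. The point is that each type declares the largest payload it is willing to accept, so a hostile count is rejected by the object's own domain knowledge rather than by a framework-wide buffer cap:

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative type that participates in limiting its own resources.
// The limit of 1000 is an assumed domain-specific bound for the example.
struct Roster {
    static constexpr std::size_t max_members = 1000;
    std::vector<std::string> members;

    // Called by the deserializer with the count claimed on the wire.
    // The object itself decides whether that count is plausible.
    void accept_claimed_count(std::uint32_t claimed) {
        if (claimed > max_members)
            throw std::runtime_error("Roster: claimed count exceeds object limit");
        members.reserve(claimed);   // bounded by the object's own policy
    }
};
```

A framework-level byte check can only say whether a count fits in the message; only the object knows that, say, a roster with ten million members is nonsense for its application.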
Finally, I am _not_ proposing my method as a solution. As I have said
many times before, my method, like Jarl's, involves arbitrariness, and
arbitrariness (to this extent) is usually bad in engineering.
However, my method does not require pre-allocation of memory, nor does
it require superfluous reallocations while an object is being
deserialized into. Any arbitrariness that my method has, Jarl's method
will certainly have.
That is why I would find it hard to give him credit for the buffer
method. I believe that my method, as weak as it is, is better than
what he proposed.
-Le Chaud Lapin-
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]