Re: Preventing Denial of Service Attack In IPC Serialization
On Jul 8, 8:50 am, Jeff Koftinoff <jeff.koftin...@gmail.com> wrote:
> On Jul 7, 2:48 pm, brang...@ntlworld.com (Dave Harris) wrote:
> <snip>
> > This approach is orthogonal to your push/pop_limit scheme. Both can
> > co-exist. One problem I have with your scheme is that it says, in effect,
> > that one data structure is allowed to allocate more memory than another.
> > This is wrong because it should not be a property of the data structure,
> > but of the socket. A denial-of-service attacker will seek out the trusted
> > data structures to exploit.
> Hi Dave.
>
> I have a different take on this. In the case of the ASIO
> serialization example, the 'stocks_' member is serialized/
> deserialized.
>
> Most definitely, the 'std::string code' member of the stocks structure
> ought to have a small limit, and the 'stocks_' item, which is a
> 'std::vector< stocks >', would have a different limit in terms of
> maximum stock items. But that's just my personal preference.

Yep. And it is not a preference, it is a necessity. If the same
std::string class is consuming data in two different contexts, as part
of two different objects being read in, then the limit on that data
must be defined dynamically. Pick a value too low, and you will starve
the object. Pick a value too high, and DoS will rear its head. Only
the object, in its particular context, knows best how much data is to
be read.
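
For instance (a sketch only; the helper function, the struct names,
and the toy length-prefix wire format are all invented here just to
illustrate the point), the very same std::string may need an 8-byte
cap in one object and a 64KB cap in another, and only the surrounding
object can say which:

#include <cstddef>
#include <cstdint>
#include <istream>
#include <stdexcept>
#include <string>

// Invented helper: read a length-prefixed string, refusing to allocate
// more than max_len bytes.  std::string itself cannot know max_len;
// only the caller does.
std::string read_limited_string(std::istream& in, std::size_t max_len)
{
    std::uint32_t len = 0;
    in.read(reinterpret_cast<char*>(&len), sizeof(len)); // toy wire format
    if (!in || len > max_len)
        throw std::runtime_error("incoming string exceeds context limit");
    std::string s(len, '\0');
    in.read(&s[0], len);
    return s;
}

struct ticker_symbol          // context 1: a handful of characters
{
    std::string code;
    void deserialize(std::istream& in) { code = read_limited_string(in, 8); }
};

struct press_release          // context 2: the same std::string, much larger cap
{
    std::string body;
    void deserialize(std::istream& in) { body = read_limited_string(in, 64 * 1024); }
};
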
These containers, std::string, std::vector<>, std::list<>,
std::set<>, etc., cannot specify their own limits. Whatever limit
they choose will be inappropriate, by definition. Only the objects, or
the naked code surrounding them, can specify the limit. However, for
an object Foo that contains as members a std::string, std::vector<>,
std::list<>, std::set<>, or any combination thereof, it would be best
for Foo to take responsibility for specifying the limit.
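
With the push_limit/pop_limit scheme mentioned above, that would look
roughly like this (a declaration-only sketch; the member names and
signatures are my assumption of what such a scheme looks like, not
taken from any actual library):

#include <cstddef>
#include <string>
#include <vector>

// Assumed shape of the push_limit/pop_limit idea (declarations only):
// the archive refuses to deserialize anything larger than the
// innermost pushed limit.
class iarchive
{
public:
    void push_limit(std::size_t max_bytes);
    void pop_limit();

    iarchive& operator>>(std::string& s);
    template <class T> iarchive& operator>>(std::vector<T>& v);
};

struct Foo
{
    std::string title;
    std::vector<double> samples;

    // Foo - not std::string, not std::vector - decides what is
    // reasonable for each of its members in this particular context.
    void deserialize(iarchive& ar)
    {
        ar.push_limit(256);          // a title is never megabytes
        ar >> title;
        ar.pop_limit();

        ar.push_limit(1 << 20);      // up to ~1 MB of samples here
        ar >> samples;
        ar.pop_limit();
    }
};
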
If Foo is naturally and legitimately 16MB, as is the case for a data
structure that I serialize today, which has around 8000 elements in
it, each of which is a complex object, one cannot pre-allocate a
buffer. That would not only be wasteful, it would reintroduce the very
problem it was trying to prevent.

In your example, sizeof(T) does not include the size of the individual
stock 'code' and stock 'name' strings, which themselves have
unlimited, variable length, unless the vector deserialization code has
some way of telling 'ar >> vec[i]' how much to allow as well.
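
That is, checking count * sizeof(T) only bounds the fixed-size portion
of each element; the limit has to be handed down so that the nested
strings are bounded too, along the lines of the following (invented
names, purely illustrative):

#include <cstddef>
#include <string>
#include <vector>

// Hypothetical: limits for one element, handed down by the container
// so that the element's own strings are bounded as well.
struct element_limits
{
    std::size_t max_code_len;
    std::size_t max_name_len;
};

struct stock
{
    std::string code;
    std::string name;

    template <class Archive>
    void deserialize(Archive& ar, const element_limits& lim)
    {
        ar.read_string(code, lim.max_code_len);
        ar.read_string(name, lim.max_name_len);
    }
};

template <class Archive>
void deserialize_stocks(Archive& ar, std::vector<stock>& vec,
                        std::size_t max_count, const element_limits& lim)
{
    std::size_t count = ar.read_count(max_count); // rejects absurd counts up front
    vec.clear();
    vec.reserve(count);                           // safe only because count is bounded
    for (std::size_t i = 0; i != count; ++i)
    {
        stock s;
        s.deserialize(ar, lim);   // each element's strings are bounded too,
        vec.push_back(s);         // not just count * sizeof(stock)
    }
}
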
There are some who might think of placing a static constant value
inside each object and letting the container that holds the objects
extract that constant, "relieving the object" of having to figure out
what its individual limit is. I would like to preemptively say that
this is a bad idea.
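
By that I mean something along these lines (numbers made up):

#include <cstddef>
#include <string>

struct stock
{
    // The idea I am arguing against: a limit baked into the type itself.
    static const std::size_t max_serialized_size = 256;

    std::string code;
    std::string name;
};

// The container would then do something like
//
//     ar.push_limit(count * stock::max_serialized_size);
//
// but a compile-time constant cannot vary with context: it will be too
// small for one use of stock and needlessly generous for another, which
// is exactly the problem we started with.
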
-Le Chaud Lapin-