Re: Preventing Denial of Service Attack In IPC Serialization
jaibuduvin@gmail.com (Le Chaud Lapin) wrote (abridged):
> I would also like to assert for the record that I am all but
> convinced that the only survivable (not perfect, but workable)
> [approach] _requires_ participation by the objects themselves to
> limit resources. Before I was only vaguely suspicious. Now I am
> certain. Jarl's method ignores participation by the objects.
Here's what I got out of the discussion. The core idea was that the
amount of memory allocated should be roughly proportional to the amount
of data sent over the link.
Given that, we can control the denial of service attacks by placing a
limit on the amount of data the link accepts.
One way to do this is with packets and buffers. It seems to me it
could also be done by keeping a count of the number of bytes received
and checking that the count never exceeds a limit. That would avoid
the extra data copies and memory allocations a packet-oriented approach
incurs. However the limit is enforced, it is a property or policy of
the socket, not of the generic serialisation framework.
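As a rough sketch of what I mean by the byte-counting variant (the
class and every name in it are my own invention, not part of any
particular framework):

#include <cstddef>
#include <stdexcept>

// Sketch only: a socket wrapper that counts incoming bytes and refuses
// to let one connection exceed a fixed allowance.
class counted_socket {
public:
    explicit counted_socket(std::size_t byte_limit)
        : byte_limit_(byte_limit), bytes_received_(0) {}

    // Every raw read from the link goes through here.
    void read(void* dest, std::size_t n) {
        if (n > byte_limit_ - bytes_received_)
            throw std::runtime_error("link byte limit exceeded");
        raw_read(dest, n);              // the actual recv() call
        bytes_received_ += n;
    }

    std::size_t bytes_received() const { return bytes_received_; }
    std::size_t byte_limit() const     { return byte_limit_; }

private:
    void raw_read(void* dest, std::size_t n);   // platform-specific
    std::size_t byte_limit_;
    std::size_t bytes_received_;
};

The check rides on reads that happen anyway, so there is no extra
copying or buffering involved.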
The responsibility of the serialisation framework is to ensure that the
core idea is true. It needs to avoid allocating more memory than the
socket is allowed to receive, and preferably not more memory than the
socket actually has received. The socket needs to make that information
available. So I'd expect code like:
size_t size;
ar >> size;
size_t limit = ar.bytes_allowed() / sizeof(T);
vec.reserve( min( size, limit ) );
for (size_t i = 0; i < size; ++i) {
    vec.resize( i+1 );
    ar >> vec.back();
}
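Packaged as the framework's generic vector reader, that loop might look
like the following sketch (Archive and bytes_allowed() stand for the
hypothetical interface discussed above):

#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: deserialise a std::vector<T>. Capacity is reserved only up
// to what the link currently permits, and the vector grows one element
// at a time as data actually arrives.
template<class Archive, class T>
Archive& operator>>(Archive& ar, std::vector<T>& vec) {
    std::size_t size;
    ar >> size;                              // size claimed by the sender
    std::size_t limit = ar.bytes_allowed() / sizeof(T);
    vec.clear();
    vec.reserve(std::min(size, limit));      // never trust 'size' alone
    for (std::size_t i = 0; i < size; ++i) {
        vec.resize(i + 1);
        ar >> vec.back();
    }
    return ar;
}

The claimed size is used only as a hint for reserve(); the allocation
an attacker can force ahead of the data is bounded by bytes_allowed().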
We need a reasonable policy for bytes_allowed(). I think it should
probably return at least 1024 even if not that many bytes are available
in buffers yet. The bigger the figure, the less reallocation and
copying, but the greater the vulnerability to denial of service.
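Expressed as code, one such policy might be (pure sketch; the
parameters are whatever the archive can learn from its socket):

#include <algorithm>
#include <cstddef>

// Hypothetical policy: report at least a 1024-byte floor so small
// structures can reserve up front, but never more than the link's
// remaining allowance.
std::size_t bytes_allowed(std::size_t remaining_allowance,
                          std::size_t bytes_buffered) {
    const std::size_t floor_bytes = 1024;
    return std::min(remaining_allowance,
                    std::max(bytes_buffered, floor_bytes));
}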
Given a reasonably large bytes_allowed(), many vectors will not need to
reallocate and copy their memory at all, because their final size will
be small enough. Bigger ones will have some overhead, but it is
bounded: with the usual geometric growth policy the number of
reallocations is only logarithmic in the final size, and they start
from a reasonably large capacity anyway. (Doubling from a 1024-byte
initial capacity to 1 MB, for instance, takes only ten reallocations.)
This approach is orthogonal to your push/pop_limit scheme; both can
co-exist. One problem I have with your scheme is that it says, in
effect, that one data structure is allowed to allocate more memory than
another. That puts the limit in the wrong place: it should be a
property of the socket, not of the data structure. A denial-of-service
attacker will simply seek out the most trusted data structures to
exploit.
It is also quite low-level. It requires us to count up the number of
bytes each structure might need, but we don't actually care about that.
We want to limit the /total/ amount of memory allocated across /all/
the data structures, not the amount used by any specific one of them.
It's at the wrong level of abstraction.
If it has to be done at that level, I'd suggest having push_limit()
affect the result of bytes_allowed(). Pushing a big number would
temporarily increase efficiency (by reducing vector reallocations) at the
cost of increased vulnerability. That trade-off seems unavoidable to me.
To be honest, it seems like a hack; like saying we care more about
efficiency than security.
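For what it's worth, here is how I'd picture push_limit() grafted onto
bytes_allowed() (a sketch of my suggestion above, not your interface):

#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Sketch: push_limit()/pop_limit() as temporary overrides of the figure
// bytes_allowed() reports, rather than as per-structure byte budgets.
class limited_archive {
public:
    explicit limited_archive(std::size_t socket_allowance)
        : socket_allowance_(socket_allowance) {}

    void push_limit(std::size_t bytes) { overrides_.push_back(bytes); }

    void pop_limit() {
        if (overrides_.empty())
            throw std::logic_error("pop_limit without push_limit");
        overrides_.pop_back();
    }

    std::size_t bytes_allowed() const {
        // A pushed limit raises the figure above the default floor
        // (fewer reallocations, more up-front exposure), but never
        // beyond what the socket itself will accept in total.
        const std::size_t floor_bytes = 1024;
        std::size_t wanted =
            overrides_.empty() ? floor_bytes : overrides_.back();
        return std::min(wanted, socket_allowance_);
    }

private:
    std::size_t socket_allowance_;
    std::vector<std::size_t> overrides_;
};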
If you use different limits for different structures, attackers will
seek out the structures with the biggest limits and focus their attacks
there, ignoring the rest. So you might as well give the rest the same
limit; you don't increase vulnerability by doing so, as long as the
socket itself limits the total number of bytes received.
My impression is that you have picked up on inefficiencies of specific
implementations. I have tried above to emphasise the core ideas, and to
show how different implementations can reduce or avoid those
inefficiencies.
Another variation, if you are still bothered by the cost of growing
vectors incrementally, is to replace the vector code with something like:
size_t size;
ar >> size;
if (size > ar.bytes_allowed() / sizeof(T))
    throw "too big";
vec.resize( size );
for (size_t i = 0; i < size; ++i)
    ar >> vec[i];
This treats bytes_allowed() as a hard limit. It seems to me it is less
flexible without being significantly more efficient or more secure, for a
given result from bytes_allowed(). (In practice we will probably need to
compensate for the loss of flexibility by increasing bytes_allowed(), so
it will end up being less secure.)
-- Dave Harris, Nottingham, UK.