On Jul 11, 10:56 am, Le Chaud Lapin <jaibudu...@gmail.com> wrote:
> As Jeff mentioned in one of his earlier posts (I cannot find it), it
> allows the receiver to tell more quickly if the sender is trying to
> induce DoS. For example, if at some deeply nested level, a string is
> being serialized into, and that particular string has a byte-limit
> of, say, 512 bytes, and the sender is declaring that the string is
> going to be 4MB, then the receiver can immediately throw an exception
> because the limit will be breached. Note that the exception is thrown
> before any memory allocation of any kind at the receiver.
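
If I follow you, the check would look something like the sketch
below. The Archive type and the names in it are placeholders of
mine, not from any real framework; the point is only that the
declared length is compared with the field's limit before any
buffer is touched.

#include <cstdint>
#include <stdexcept>
#include <string>

struct Archive {
    const std::uint8_t* cur;
    const std::uint8_t* end;

    std::uint32_t read_u32()
    {
        if (end - cur < 4)
            throw std::runtime_error("truncated message");
        std::uint32_t v = 0;
        for (int i = 0; i < 4; ++i)
            v |= std::uint32_t(cur[i]) << (8 * i);   // little-endian
        cur += 4;
        return v;
    }

    // Reject the declared length before any allocation happens.
    void read_string(std::string& s, std::uint32_t byte_limit)
    {
        std::uint32_t declared = read_u32();
        if (declared > byte_limit)
            throw std::length_error("declared length exceeds field limit");
        if (std::uint32_t(end - cur) < declared)
            throw std::runtime_error("truncated message");
        s.assign(reinterpret_cast<const char*>(cur), declared);
        cur += declared;
    }
};

With a 512-byte limit, a sender declaring 4MB is rejected by the
length_error above before the string ever grows.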

I'm not convinced this is a good trade-off between what is
required of users -- specifying a limit on every type --
and what it accomplishes.

The macro limit, combined with keeping track of how much of
the message remains, can prevent a DoS without requiring
users to micromanage the process. This approach could, as
you mention, take longer to figure out it has been fooled,
but it would not result in a DoS and it would be more
flexible from a development standpoint. It is possible
to extend this approach to get more control if need be:
each variable-length, high-level object in a message could
be prefixed by its length. For example, if a message
consists of a vector<char> and a deque<int>, the structure
would be
total message length
length of first object
first object data
length of second object
second object data
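
In code, the receiver side might look roughly like this.
Reader, MAX_MESSAGE, and the rest of the names are my own
invention for illustration; the idea is that one global cap
plus the running count of bytes remaining is enough to refuse
an absurd length prefix, with no per-type limits:

#include <cstdint>
#include <stdexcept>

constexpr std::uint32_t MAX_MESSAGE = 1u << 20;  // e.g. 1 MB cap per message

class Reader {
    const std::uint8_t* cur_;
    std::uint32_t remaining_;                    // bytes left in this message
public:
    Reader(const std::uint8_t* buf, std::uint32_t total)
        : cur_(buf), remaining_(total)
    {
        if (total > MAX_MESSAGE)
            throw std::length_error("message exceeds global limit");
    }

    std::uint32_t read_u32()
    {
        if (remaining_ < 4)
            throw std::runtime_error("ran past end of message");
        std::uint32_t v = 0;
        for (int i = 0; i < 4; ++i)
            v |= std::uint32_t(cur_[i]) << (8 * i);
        cur_ += 4;
        remaining_ -= 4;
        return v;
    }

    // An object's length prefix can never claim more than what is
    // left in the message, so a lie is caught before allocation.
    std::uint32_t read_length_prefix()
    {
        std::uint32_t n = read_u32();
        if (n > remaining_)
            throw std::length_error("object claims more bytes than remain");
        return n;
    }
};

Deserializing the vector<char> would call read_length_prefix()
for "length of first object" and then consume that many bytes,
and likewise for the deque<int>.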

On the receiving side, the framework could keep track
of the average size of each of the high-level objects
over time. If the first object is on average 10% of
the total and has never been more than 18%, but now it
is supposedly 99%, there is reason to be suspicious.
This approach wouldn't require difficult guesswork
for every type that has instances marshalled.
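
A small statistics record per object slot is enough for that
kind of check. The names and the threshold below are only
illustrative; a real framework would tune or replace them:

#include <cstddef>

struct SizeStats {
    double avg_fraction = 0.0;   // running mean of object size / message size
    double max_fraction = 0.0;   // largest fraction seen so far
    std::size_t samples = 0;

    void record(double fraction)
    {
        ++samples;
        avg_fraction += (fraction - avg_fraction) / samples;
        if (fraction > max_fraction)
            max_fraction = fraction;
    }

    // Flag a length far outside what this slot has historically
    // used, e.g. an object never above 18% now claiming 99%.
    bool suspicious(double fraction) const
    {
        return samples > 10 && fraction > 2.0 * max_fraction;
    }
};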