On Thu, Oct 28, 2010 at 09:30:49AM +0200, Erik Huelsmann wrote:
> >> @@ -593,7 +598,14 @@ static svn_error_t *read_string(svn_ra_s
> >> svn_ra_svn_item_t *item, apr_uint64_t len)
> >> {
> >> svn_stringbuf_t *stringbuf;
> >> - if (len > 0x100000)
> >> +
> >> + /* We should not use large strings in our protocol. However, we may
> >> + * receive a claim that a very long string is going to follow. In that
> >> + * case, we start small and wait for all that data to actually show up.
> >> + * This does not fully prevent DoS attacks but makes them harder (you
> >> + * have to actually send gigabytes of data).
> >
> > Wow, I hadn't even considered this. Once we get this on trunk, it
> > might make sense to propose a backport, since this has (potential?)
> > security implications.
>
>
> Actually, that was already disclosed as a security vulnerability some
> years ago. The comment by Stefan makes the vulnerability painfully
> apparent, but I guess that's a good thing. Notice that he did nothing
> but name the constant and add the explanation. See
> http://subversion.apache.org/security/CAN-2004-0413-advisory.txt
>
> This is exactly the point I was talking about when I said properties
> are length-limited by ra_svn. (In relation to the maximum size of
> merge-tracking information.) The actual code is a little bit
> different from what I remembered, because it does seem to grow the
> buffer once it gets past the first MiB.
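For illustration, here is a rough sketch of that "start small and grow
as the data arrives" idea -- this is not the actual ra_svn code, just
plain stdio, but it shows why a peer claiming a huge length still has
to transmit the bytes before we allocate much memory:

[[[
/* Rough sketch, not the real read_string(): read CLAIMED_LEN bytes from
 * IN, but size the buffer according to the data actually received,
 * growing it in small steps instead of trusting the claimed length. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE 4096  /* read and grow in 4 KiB steps */

static char *
read_claimed_string(FILE *in, size_t claimed_len, size_t *actual_len)
{
  size_t capacity = claimed_len < CHUNK_SIZE ? claimed_len : CHUNK_SIZE;
  size_t used = 0;
  char *buf = malloc(capacity ? capacity : 1);

  if (!buf)
    return NULL;

  while (used < claimed_len)
    {
      size_t want = claimed_len - used;
      size_t n;

      if (want > CHUNK_SIZE)
        want = CHUNK_SIZE;

      if (used + want > capacity)
        {
          /* Grow only when the next chunk would not fit, so memory use
           * tracks what the peer has actually sent. */
          char *tmp;

          capacity *= 2;
          tmp = realloc(buf, capacity);
          if (!tmp)
            {
              free(buf);
              return NULL;
            }
          buf = tmp;
        }

      n = fread(buf + used, 1, want, in);
      if (n == 0)
        break;  /* EOF or error before the claimed length arrived */
      used += n;
    }

  *actual_len = used;
  return buf;
}
]]]

An attacker can still claim a huge length, but the big allocations only
happen if the data actually shows up, so the attack costs bandwidth.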
Can we define a constant like SVN_PROP_MAX_SIZE that says how large
a property can be? This would be useful in the patch code, for instance.
Right now, a patch with a large property value will cause out-of-memory
errors because the code has no way of knowing the maximum size of a
property value -- so it just keeps reading and allocating.
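To make the idea concrete, a hypothetical sketch of such a constant and
a check in the patch code could look like the following;
SVN_PROP_MAX_SIZE is just the name suggested above with a made-up value,
and append_prop_value() is invented for illustration:

[[[
/* Hypothetical sketch only: the constant's value and the helper below
 * are made up for illustration, not existing Subversion API. */
#include <apr.h>
#include "svn_error.h"
#include "svn_string.h"

/* Proposed upper bound on the size of a property value, in bytes. */
#define SVN_PROP_MAX_SIZE (1024 * 1024)  /* example value only */

/* Append LEN bytes of DATA to the property VALUE being parsed from a
 * patch, refusing to grow beyond SVN_PROP_MAX_SIZE instead of
 * allocating without bound. */
static svn_error_t *
append_prop_value(svn_stringbuf_t *value, const char *data, apr_size_t len)
{
  if (value->len + len > SVN_PROP_MAX_SIZE)
    return svn_error_createf(SVN_ERR_BAD_PROPERTY_VALUE, NULL,
                             "Property value exceeds %d bytes",
                             SVN_PROP_MAX_SIZE);

  svn_stringbuf_appendbytes(value, data, len);
  return SVN_NO_ERROR;
}
]]]

With something like that, the patch code (and ra_svn) could reject an
oversized property value early with a clear error instead of reading
and allocating until memory runs out.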
Also, do we have a plan for what to do when (not "if") mergeinfo in
repositories out there hits the limit?
Stefan