Re: [serf-dev] serf errors on responses bigger than 4GB
On Wed, Oct 1, 2014 at 12:48 PM, Philip Martin wrote:
> Andreas Stieger <andreas.stieger_at_gmx.de> writes:
>> will once again point to the serf issues below and httpd/network config.
> Andreas identified a bug in serf that causes decompression to fail when
> the compressed size is bigger than 4GB. This bug has been fixed on trunk
> but not in any release. This bug does not affect commit, but it does
> affect checkout.
> In my testing a commit of a 5GB /dev/urandom file over HTTP using serf
> 1.3.x works with compression both disabled and enabled. A checkout over
> HTTP using serf 1.3.x fails:
> svn: E120104: ra_serf: An error occurred during decompression
> I also tried the checkout with compression disabled by the client and
> saw the error:
> svn: E120106: ra_serf: The server sent a truncated HTTP response body.
> but this turned out to be the known mod_deflate memory leak causing the
> server to abort. With compression disabled on the server the
> uncompressed checkout works.
> Doing a search, I see users reporting both of the above serf errors. The
> way to fix the decompression error is to disable compression. This can
> be done on the client if the server is a recent 2.4, as it is not
> affected by the mod_deflate bug. If the server is older, then a client
> disabling compression will probably cause the truncated error, and the
> fix is to disable mod_deflate on the server or to revert to a 1.7/neon
> client.
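Concretely, the client-side workaround is the `http-compression` option in Subversion's runtime `servers` configuration. A sketch of the relevant setting (this disables compression for all servers; a named server group could be used instead):

```ini
# ~/.subversion/servers -- disable compression on the client
# (safe when the server runs a recent httpd 2.4 with the mod_deflate fix)
[global]
http-compression = no
```

On an older server, the equivalent server-side fix is to keep mod_deflate from compressing repository responses, e.g. by setting the documented `no-gzip` environment variable (`SetEnv no-gzip`) inside the `<Location>` block for the repository, or by not loading mod_deflate at all.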
> I merged r2419 to my 1.3.x build and it fixes the compressed checkout.
> Are there any plans for a serf release that includes this fix?
I've learned from earlier releases that (most) packagers won't upgrade
serf unless there's an svn release. As a result, I plan a serf (patch)
release right before an svn (patch) release, but not earlier.
> Philip Martin | Subversion Committer
> WANdisco // *Non-Stop Data*
Received on 2014-10-01 13:04:06 CEST
This is an archived mail posted to the Subversion Dev mailing list.