
Re: Random serf checkout failures

From: Ivan Zhakov <ivan_at_visualsvn.com>
Date: Tue, 6 Nov 2012 22:45:33 +0400

On Tue, Nov 6, 2012 at 10:24 PM, Ivan Zhakov <ivan_at_visualsvn.com> wrote:
> On Tue, Nov 6, 2012 at 9:13 PM, Lieven Govaerts <lgo_at_mobsol.be> wrote:
>> Hi,
>> On Tue, Nov 6, 2012 at 4:50 PM, Lieven Govaerts <lgo_at_mobsol.be> wrote:
>>> Ben,
>>> On Tue, Nov 6, 2012 at 4:09 PM, Ben Reser <ben_at_reser.org> wrote:
>>>> I worked with Philip today and was able to reproduce the exact problem
>>>> he's been seeing. I ended up having to get his full httpd.conf to
>>>> figure it out.
>>>> Ultimately the problem proved to be that he had this directive:
>>>> Timeout 3
>>>> which means that if we don't tend the connection for 3 seconds, Apache
>>>> will close it. Serf should be able to deal with the connection being
>>>> closed.
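
For reference, the directive in question is a one-liner in httpd.conf (the value 3 matches Philip's configuration; the stock default is far larger):

```apache
# Close the connection if the client neither sends nor reads
# data for 3 seconds.
Timeout 3
```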
>> okay, so with the Timeout directive added I can reproduce this issue.
>> What I see is that the server closes the connection in the middle of
>> sending a response to the client. It doesn't even finalize the
>> response first.
>> So ra_serf is reading the data from the response bucket, but gets an
>> APR_EOF when it needs more data than specified in the Content-Length
>> header of the response.
>> What is the expected behavior here, let serf close down the connection
>> and try the request again on a new connection?
> I don't think so. The Timeout 3 directive means "abort the connection if
> the client didn't read data within 3 seconds", so the most likely
> explanation is that the client is busy doing something for a long time
> without reading data from this network connection.
> It's probably related to the issue I found today:
> 1. We read the REPORT response faster than the PROPFINDs/GETs complete.
> 2. Data from the REPORT response goes into the spillbuf: the first 1MB
> goes to memory, the rest is stored on disk.
> 3. After some PROPFINDs/GETs complete, the serf update editor resumes
> parsing data from the spillbuf, 1MB in one chunk.
> Two problems come from this:
> 1. While parsing such a big block we do not read data from the network,
> leading to timeouts in some cases.
> 2. All requests created while parsing this chunk are created on one
> connection, which slows down ra_serf.
There is a comment about this in the code:
  /* ### it is possible that the XML parsing of the pending content is
     ### so slow, and that we don't return to reading the connection
     ### fast enough... that the server will disconnect us. right now,
     ### that is highly improbable, but is noted for future's sake.
     ### should that ever happen, the loops in this function can simply
     ### terminate after N seconds. */

Ivan Zhakov
Received on 2012-11-06 19:46:26 CET

This is an archived mail posted to the Subversion Dev mailing list.
