
Re: Random serf checkout failures

From: Ivan Zhakov <ivan_at_visualsvn.com>
Date: Wed, 7 Nov 2012 01:41:16 +0400

On Tue, Nov 6, 2012 at 10:24 PM, Ivan Zhakov <ivan_at_visualsvn.com> wrote:
> On Tue, Nov 6, 2012 at 9:13 PM, Lieven Govaerts <lgo_at_mobsol.be> wrote:
>> Hi,
>> On Tue, Nov 6, 2012 at 4:50 PM, Lieven Govaerts <lgo_at_mobsol.be> wrote:
>>> Ben,
>>> On Tue, Nov 6, 2012 at 4:09 PM, Ben Reser <ben_at_reser.org> wrote:
>>>> I worked with Philip today and was able to reproduce the exact problem
>>>> he's been seeing. I ended up having to get his full httpd.conf to
>>>> figure it out.
>>>> Ultimately the problem proved to be that he had this directive:
>>>> Timeout 3
>>>> which means that if we don't tend to a connection for 3 seconds,
>>>> Apache will close it. Serf should be able to deal with the
>>>> connection being closed.
>> okay, so with the Timeout directive added I can reproduce this issue.
>> What I see is that the server closes the connection in the middle of
>> sending a response to the client. It doesn't even finalize the
>> response first.
>> So ra_serf is reading the data from the response bucket, but gets an
>> APR_EOF when it needs more data than specified in the Content-Length
>> header of the response.
>> What is the expected behavior here: let serf close down the connection
>> and try the request again on a new connection?
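The failure mode Lieven describes can be sketched in a few lines: the response bucket hits EOF before Content-Length bytes have arrived, meaning the server closed the connection mid-body. The names below are illustrative, not serf's actual API:

```c
/* Minimal sketch of a truncated-response check: EOF before
   Content-Length bytes were delivered means the server closed the
   connection mid-body.  Illustrative names, not serf's API. */

typedef enum { RESP_COMPLETE, RESP_TRUNCATED, RESP_PENDING } resp_state_t;

static resp_state_t classify_read(long content_length, long received,
                                  int got_eof)
{
  if (received >= content_length)
    return RESP_COMPLETE;             /* full body delivered */
  return got_eof ? RESP_TRUNCATED     /* premature close: error/retry */
                 : RESP_PENDING;      /* more data still expected */
}
```

In the `RESP_TRUNCATED` case the question above is exactly which layer should notice this and whether the request should be retried on a fresh connection.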
> I think not. The Timeout 3 directive means "abort the connection if
> the client didn't read data within 3 seconds", so the most likely
> explanation is that the client is busy doing something for a long time
> without reading data from this network connection.
> Probably it's related to the issue I found today:
> 1. We read the REPORT response faster than the PROPFINDs/GETs complete.
> 2. Data from the REPORT response goes into the spillbuf: 1 MB is kept
> in memory, the rest is stored on disk.
> 3. After some PROPFINDs/GETs complete, the serf update editor resumes
> parsing data from the spillbuf: 1 MB in one chunk.
> Two problems arise here:
> 1. While parsing such a big block we do not read data from the
> network, leading to timeouts in some cases.
> 2. All requests created while parsing this chunk are created on one
> connection, which slows down ra_serf.
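The spillbuf behaviour in steps 2 and 3 can be sketched as follows, assuming the simple policy described above (first 1 MB in memory, remainder on disk); all names here are illustrative, not serf's code:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the spill buffer described above: up to
   MEM_LIMIT bytes of the REPORT response are kept in memory and
   everything beyond that spills to disk (simulated here with a second
   heap buffer).  Illustrative names, not serf's implementation. */

#define MEM_LIMIT (1024 * 1024)      /* 1 MB in-memory portion */

typedef struct {
  char *mem;                         /* in-memory portion */
  size_t mem_len;
  char *disk;                        /* spilled ("on disk") portion */
  size_t disk_len;
} spillbuf_t;

static void spillbuf_init(spillbuf_t *sb, size_t disk_capacity)
{
  sb->mem = malloc(MEM_LIMIT);
  sb->mem_len = 0;
  sb->disk = malloc(disk_capacity);
  sb->disk_len = 0;
}

/* Append incoming response data: fill memory first, then spill. */
static void spillbuf_write(spillbuf_t *sb, const char *data, size_t len)
{
  size_t to_mem = 0;
  if (sb->mem_len < MEM_LIMIT) {
    to_mem = MEM_LIMIT - sb->mem_len;
    if (to_mem > len)
      to_mem = len;
    memcpy(sb->mem + sb->mem_len, data, to_mem);
    sb->mem_len += to_mem;
  }
  if (len > to_mem) {
    memcpy(sb->disk + sb->disk_len, data + to_mem, len - to_mem);
    sb->disk_len += len - to_mem;
  }
}

/* The reader later gets the whole in-memory portion back as ONE chunk,
   which is why the parser can run for a long time without touching
   the network (problem 1 above). */
static size_t spillbuf_read_chunk(spillbuf_t *sb, const char **chunk)
{
  *chunk = sb->mem;
  return sb->mem_len;
}
```

With a 1.5 MB response, 1 MB ends up in memory and 0.5 MB on disk, and the parser then receives the full 1 MB memory portion as a single chunk.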
Another problem is how serf reads data from the network when there are
multiple connections: it reads from one connection until EAGAIN. But if
data comes in really fast (from a local server, for example), it keeps
reading from that connection without ever reading from the other
connections, which causes them to time out.

See outgoing.c:read_from_connection()
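A toy model of that starvation, under the assumption of two connections and a 3-tick server timeout (mirroring "Timeout 3"); this is illustrative only, not serf's actual loop:

```c
/* Toy model of the "read one connection until EAGAIN" starvation:
   connection 0 always has data ready (local server), so under the
   greedy policy connection 1 is never serviced and its server-side
   timeout expires.  Illustrative only, not serf's outgoing.c. */

#define TIMEOUT_TICKS 3   /* mirrors "Timeout 3" from the report */

/* Greedy policy: connection 0 never returns EAGAIN, so connection 1
   is never read.  Returns the longest stretch (in ticks) that
   connection 1 went unserviced. */
static int max_idle_greedy(int ticks)
{
  int idle1 = 0, max_idle = 0;
  for (int t = 0; t < ticks; t++) {
    idle1++;                /* we read conn 0 again; conn 1 waits */
    if (idle1 > max_idle)
      max_idle = idle1;
  }
  return max_idle;
}

/* Fair policy: alternate between the two connections each tick. */
static int max_idle_round_robin(int ticks)
{
  int idle[2] = { 0, 0 };
  int max_idle = 0;
  for (int t = 0; t < ticks; t++) {
    int c = t % 2;          /* service this connection */
    idle[c] = 0;
    idle[1 - c]++;
    if (idle[1 - c] > max_idle)
      max_idle = idle[1 - c];
  }
  return max_idle;
}
```

Under the greedy policy the neglected connection's idle time grows without bound and crosses the timeout; a round-robin policy keeps it bounded at one tick.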

Ivan Zhakov
Received on 2012-11-06 22:42:06 CET

This is an archived mail posted to the Subversion Dev mailing list.