
Re: [serf-dev] Re: [PATCH] serf/ra_serf: add cansend callback to stop writing requests until authn handshake is finished

From: Lieven Govaerts <lieven.govaerts_at_gmail.com>
Date: 2007-08-28 13:25:34 CEST


Thanks for the review. Some comments inline.

On 8/28/07, Ivan Zhakov <chemodax@gmail.com> wrote:
> On 7/25/07, Lieven Govaerts <lgo@mobsol.be> wrote:
> > Attached two patches implement a new callback 'cansend' in serf which is
> > used in ra_serf to hold sending a bunch of requests before the NTLM
> > authentication handshake is finished.
> >
> > We need this callback in this scenario: consider an apache setup with
> > NTLM authentication MaxRequestsPerChild set to 100. Now use svn to
> > checkout a directory with more than 50 files. Serf will make 100
> > requests (PROPFIND+GET) for all those files and sends them on one of the
> > connections. What happens is that the before the NTLM handshake is
> > finished, the connection will already max out on the number of requests.
> >
> I spent some time reviewing your patches and trying to understand how to
> fix this problem. I haven't come to a final decision, but I have some
> thoughts. Sending the first request, completing authentication, and then
> sending the other requests isn't a bulletproof solution. What if the
> first request doesn't require authentication at all, but the following
> ones do?

The authentication step is only required once per connection; whether
that happens on the first request or a later one shouldn't make a
difference. If a 401 is only received after 10 requests, then the next
request added to the pipeline will contain the NTLM auth phrase. No
other requests are sent until a response is received from the server.
If there were 50 requests still on the pipeline, then those will all
get a 401 response and be retried after the authentication has finished.
I'm not sure if the patch currently works like this, I'll test it.
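To make the idea concrete, here is a minimal sketch of the 'cansend'
gating described above: the connection asks the application whether it
may write the next queued request, and the callback answers "no" while
the authn handshake is in progress. All names (conn_t, cansend_cb,
hold_during_authn) are illustrative, not serf's actual API.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct conn_t conn_t;

/* Application-supplied callback: may the connection write another
   request right now? */
typedef bool (*cansend_cb)(conn_t *conn);

struct conn_t {
    bool authn_in_progress;   /* set when a 401 starts the handshake  */
    int  queued;              /* requests waiting to be written       */
    int  written;             /* requests actually sent on the socket */
    cansend_cb cansend;
};

/* Hold all further writes until the handshake has finished. */
static bool hold_during_authn(conn_t *conn)
{
    return !conn->authn_in_progress;
}

/* Connection write loop: drain the queue only while cansend says yes.
   Held requests simply stay queued and are retried on the next loop. */
static void conn_write_requests(conn_t *conn)
{
    while (conn->queued > 0 && conn->cansend(conn)) {
        conn->queued--;
        conn->written++;
    }
}
```

With this in place, the 50 queued PROPFIND/GET requests from the
checkout scenario would sit in the queue instead of burning through the
connection's request budget before the NTLM handshake completes.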

> Also sending 100 pipelined requests isn't good idea for me. I think we have to
> implement limit for maximum number of concurrent pipelined requests as other
> clients do. Mozilla Firefox has limit to 4 pipelined requests for example.

Why do you think it isn't a good idea? What is the benefit of
introducing this limit? Pipelined requests don't add extra load to the
server; they're just sent more efficiently.

I found this blog post concerning the decision to add pipelining
support to Firefox:
It says: '3) We limit the number of requests in a pipeline to minimize
the effects of head-of-line blocking. Mozilla uses a default value of
4, but any value up to the hard-coded limit of 8 is possible.'

Now I think the svn client and Firefox have different goals here:
Firefox parallelizes requests as much as possible (within certain
limits), because there's a value in downloading each individual image
as fast as possible. In svn we only care for the final result.

> Actually MaxRequestsPerChild limits number of connections per child
> [1]. Number of requests per connection controlled by
> MaxKeepAliveRequests [2].
> [1] http://httpd.apache.org/docs/2.0/mod/mpm_common.html#maxrequestsperchild
> [2] http://httpd.apache.org/docs/2.0/mod/core.html#maxkeepaliverequests

serf already tries to estimate MaxKeepAliveRequests by counting the
responses received before the server resets a connection, and uses that
to limit the number of requests on the pipeline.
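That estimation can be sketched roughly like this: remember how many
responses arrived before the server closed the connection, and use that
count as the cap on outstanding pipelined requests from then on. The
names below (pipeline_state_t and friends) are illustrative, not serf's
actual fields.

```c
#include <assert.h>

typedef struct {
    int completed;        /* responses received on current connection  */
    int max_outstanding;  /* learned cap; 0 means no limit learned yet */
} pipeline_state_t;

/* A response completed on the current connection. */
static void on_response(pipeline_state_t *s)
{
    s->completed++;
}

/* The server reset the connection: assume we just ran into its
   MaxKeepAliveRequests limit and remember the count as our cap. */
static void on_connection_reset(pipeline_state_t *s)
{
    if (s->completed > 0)
        s->max_outstanding = s->completed;
    s->completed = 0;
}

/* May another request be queued on the (new) connection? */
static int may_queue_request(const pipeline_state_t *s)
{
    return s->max_outstanding == 0 || s->completed < s->max_outstanding;
}
```

The estimate is conservative: until the first reset, no limit is
assumed, and after each reset the observed count becomes the ceiling
for subsequent connections.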

> > Note2: I propose you add 'svn:eol-style' 'native' to the serf source
> > files, makes it a bit easier to patch them on Windows.
> >
> PS: I've changed svn:eol-style to native in all serf sources files.

I noticed, thanks.


To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Wed Aug 29 17:42:44 2007

This is an archived mail posted to the Subversion Dev mailing list.