
Re: ra_serf and connection authentication schemes

From: Lieven Govaerts <svnlgo_at_mobsol.be>
Date: 2007-07-08 23:40:20 CEST

Justin Erenkrantz wrote:
> On 7/7/07, Lieven Govaerts <lgo@mobsol.be> wrote:
>> This doesn't work for NTLM because it's a connection authentication
>> scheme so:
>> 1. we can't copy authentication session information from a connection to
>> another, the server requires a new challenge/response cycle per
>> connection.
>> 2. we can't start pipelining requests until we have an authenticated
>> connection. If the first request fails with a 401, most likely all other
>> requests in the pipeline will fail with the same error code.
> Serf has the ability to retry the requests in this case with an
> authentication denied via the 'svn_ra_serf__request_create' call that
> is invoked after handle_auth returns without an error.
> IOW, I wouldn't worry - fix it up such that it properly negotiates the
> NTLM authentication and just ride out the storm of remaining 401s
> already sent until you get the challenge accepted. How do you know
> when the storm is over? Dunno - you'll have to get creative.
NTLM makes this a bit more difficult than I hoped for. Fact is that
you can only have one challenge/response cycle going on per connection
at any time. If ra_serf sends the challenge headers twice, mod_auth_sspi
will report an error and mark the connection as invalid. I don't know if
this is normal behavior, but since mod_auth_sspi seems to be the most
common implementation, I figure it's best that we support it.

The current ra_serf implementation stores the authentication headers in
the connection object and simply copies them to each request sent on
that connection. Because of the restriction above we can't do this for
NTLM, so I had to add an extra callback that sets those headers on only
one request. Not that hacky in the end.
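The per-request callback idea can be sketched roughly like this. This is a standalone illustration, not serf's actual API: the `conn_t` struct and both function names are invented for the example, and the header values are made up.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical connection state. For Basic, the header can be cached on
 * the connection and copied to every request; for a connection-based
 * scheme like NTLM it must be emitted on exactly one request per
 * challenge/response cycle. */
typedef struct {
    const char *auth_header;   /* cached "Authorization" header value */
    bool connection_based;     /* true for NTLM, false for Basic */
    bool cycle_in_flight;      /* NTLM: a handshake request is pending */
} conn_t;

/* Return the header to attach to this request, or NULL if none should
 * be sent because an NTLM handshake is already in progress. */
static const char *auth_header_for_request(conn_t *conn)
{
    if (!conn->auth_header)
        return NULL;
    if (!conn->connection_based)
        return conn->auth_header;      /* Basic: copy to every request */
    if (conn->cycle_in_flight)
        return NULL;                   /* NTLM: one cycle at a time */
    conn->cycle_in_flight = true;
    return conn->auth_header;
}

/* Called once the response to the handshake request has been parsed. */
static void auth_cycle_done(conn_t *conn)
{
    conn->cycle_in_flight = false;
}
```

With this shape, sending the challenge headers on a second concurrent request (which mod_auth_sspi rejects) simply cannot happen: the second caller gets NULL until the first cycle completes.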

But this implementation seems to have some consequences:
* There's an abundance of failing requests. The first batch of requests
fails with 401; then the first response is parsed and the NTLM
challenge/response cycle starts, but the whole batch is retried as well.
Only on the third try do the requests succeed. It looks like some 40
requests fail before the first ones succeed. The default in my Apache
configuration is a maximum of 100 requests on a persistent connection,
so this seems like a lot of overhead to me.

* The responses come in out of order. I guess this is expected behavior
when pipelining; with no authentication or Basic authentication, the
responses come in in the same order as the requests were sent.
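For a rough feel of the numbers in the first point: if a batch of N requests is pipelined before the first 401 comes back, and each request fails for the two round trips the NTLM handshake needs, the count grows like this. This is only a back-of-the-envelope sketch, not a measurement; the batch size of 20 below is an assumption chosen to match the ~40 observed failures.

```c
/* Rough estimate of failed requests during the NTLM handshake:
 * every request already in the pipeline fails once per round trip
 * spent on the challenge/response exchange before the connection
 * is authenticated. */
static int estimated_failures(int batch_size, int failed_round_trips)
{
    return batch_size * failed_round_trips;
}
```

With an assumed pipeline depth of 20 and two failed round trips, that gives the ~40 failures mentioned above; a probe-first approach would cut the batch size to 1 during the handshake.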

I see two possible ways to reduce the overhead of failing requests:
1. My previous proposal: send one request first to set up
authentication, then start pipelining.
2. Add the request that answers the authentication challenge at the
beginning of the request queue instead of at the end. I doubt this will
have much impact, though, because most likely the request queue will be
empty.
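Option 1 could look roughly like the state machine below. This is a sketch under the assumption that the connection tracks a simple authentication state; none of these names come from serf or ra_serf.

```c
#include <stdbool.h>

typedef enum { CONN_UNAUTH, CONN_PROBING, CONN_READY } conn_state_t;

typedef struct {
    conn_state_t state;
    int queued;      /* requests waiting to be written */
} pipe_conn_t;

/* Decide how many queued requests may be written right now: before
 * authentication, send a single probe request and hold the rest back;
 * once the challenge/response completes, open the pipeline fully. */
static int writable_now(pipe_conn_t *c)
{
    switch (c->state) {
    case CONN_UNAUTH:
        if (c->queued == 0)
            return 0;
        c->state = CONN_PROBING;
        return 1;               /* one probe request only */
    case CONN_PROBING:
        return 0;               /* wait for the handshake to finish */
    case CONN_READY:
        return c->queued;       /* pipeline everything */
    }
    return 0;
}

/* Called when the probe request's 2xx response has been parsed. */
static void probe_succeeded(pipe_conn_t *c)
{
    c->state = CONN_READY;
}
```

The trade-off is one extra serialized round trip per connection versus retrying a whole pipelined batch twice.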

With the current patch I get random crashes in svn checkout. The crashes
happen in libsvn_wc/update_editor.c's window_handler, where the
handler_baton seems to be corrupt. I noticed it always happens when a
GET request was done twice:

- lgo [08/Jul/2007:22:51:42 +0200] "PROPFIND /ntlmbasic/repos1/!svn/ver/17/trunk/cp2/B/C/versioned.odt HTTP/1.1" 207 1282
- lgo [08/Jul/2007:22:51:42 +0200] "GET /ntlmbasic/repos1/!svn/ver/17/trunk/cp2/B/C/versioned.odt HTTP/1.1" 200 6349
- lgo [08/Jul/2007:22:51:42 +0200] "GET /ntlmbasic/repos1/!svn/ver/17/trunk/cp2/B/C/versioned.odt HTTP/1.1" 200 6349

I'm not really sure why the request is done twice; it doesn't happen
with Basic authentication, so it will probably be easy to spot.


To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Mon Jul 9 00:44:28 2007

This is an archived mail posted to the Subversion Dev mailing list.
