
Re: Re-use connection for svn:externals

From: Ivan Zhakov <ivan_at_visualsvn.com>
Date: Tue, 9 Feb 2010 19:43:48 +0200

On Tue, Feb 9, 2010 at 7:09 PM, Julian Foad <julian.foad_at_wandisco.com> wrote:
> On Mon, 2010-02-08, Phillip Hellewell wrote:
>> I'm not sure if this is related to issue 1448 or not (it's kinda like the
>> opposite of 1448 actually), but I use externals extensively and doing an
>> update is slow.  It appears to be creating a new connection for each
>> external even though it's the same repository on the same server.  Each
>> connection takes about 3 seconds to make.
>>
>> I have a large project (over 10,000 files) that takes about 10 seconds to do
>> an update (with no externals).  Then I have a sibling folder that acts as a
>> "module" to provide a limited view of the project by using relative
>> externals to about a dozen subdirs of the project.  This "module" folder
>> takes 40 seconds to do an update.  So it takes longer to update this
>> "module" folder even though it only contains a subset of the project!  Even
>> individual file externals take 3 seconds each to update.
>>
>> Since all the externals are relative, they point to the same repository on
>> the same server, so can't we re-use an existing connection?
>
> Yes, that would be a good enhancement.
>
> There are several places where the client should/could re-use a
> connection. During a multi-target update command like "svn up a b c" is
> another example.
>
> Would you, or anyone you know, be interested in working on it? I would
> be glad to give you some help and guidance.
>
Agreed. svn_client_* functions spend a lot of time creating RA
connections. My idea was to introduce a "connection pool" to hold and
reuse RA connections. This pool could be stored in svn_client_ctx_t or
in a separate object.
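[Editor's illustration] The pooling idea can be sketched briefly. This is a minimal, language-agnostic illustration in Python; the names (`ConnectionPool`, `open_session`) are hypothetical and not part of the actual svn_ra or svn_client API:

```python
# Sketch of the "connection pool" idea: cache RA sessions keyed by
# repository root URL, so updating many externals that all point at the
# same repository reuses one connection instead of opening a new one
# (~3 seconds each) per external. Names here are illustrative only.

class ConnectionPool:
    def __init__(self, open_session):
        self._open_session = open_session  # expensive session factory
        self._sessions = {}                # repository root URL -> session

    def acquire(self, repos_root_url):
        """Return a cached session for this repository, opening one if needed."""
        session = self._sessions.get(repos_root_url)
        if session is None:
            session = self._open_session(repos_root_url)
            self._sessions[repos_root_url] = session
        return session

# Simulate updating a dozen relative externals against one repository:
opened = []
pool = ConnectionPool(lambda url: opened.append(url) or f"session:{url}")
for _ in range(12):
    pool.acquire("https://example.com/svn/project")
print(len(opened))  # one physical connection instead of twelve
```

Whether the pool lives in svn_client_ctx_t or in a separate object, the key design point is the same: the cache key must identify the server/repository (e.g. the repository root URL plus credentials), so only genuinely identical connections are shared.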

-- 
Ivan Zhakov
VisualSVN Team
Received on 2010-02-09 18:44:25 CET

This is an archived mail posted to the Subversion Dev mailing list.
