
RE: Speeding up workspace

From: Bert Huijben <rhuijben_at_sharpsvn.net>
Date: Thu, 12 Feb 2009 09:41:53 +0100

> -----Original Message-----
> From: Listman [mailto:listman_at_burble.net]
> Sent: Thursday, February 12, 2009 4:24
> To: Bert Huijben
> Cc: users_at_subversion.tigris.org
> Subject: Re: Speeding up workspace
> On Feb 11, 2009, at 2:34 PM, Bert Huijben wrote:
> >> -----Original Message-----
> >> From: Justin Case [mailto:send_lotsa_spam_here_at_yahoo.com]
> >> Sent: Wednesday, February 11, 2009 1:54 PM
> >> To: users_at_subversion.tigris.org
> >> Subject: RE: Speeding up workspace
> >>
> >> --- On Tue, 2/10/09, Bert Huijben wrote:
> >>> Subversion locking
> >>> strategy changes in a system that writes less locking files
> >>> (8234 lock files in this case).
> >>
> >> Sorry for the possibly dumb question: do you mean here the file-locking
> >> done by the SVN server for its internal needs, or a user locking a file?
> >> I hope it's the first option (then I have hopes for the future).
> >
> > No, and no... yet another kind of lock.
> >
> > I'm talking about the working copy locking (nothing on the server): the
> > lock that asks you to run 'svn cleanup' if it isn't removed properly.
> >
> > Before svn update (or another operation) starts doing actual work that
> > changes your working copy, it locks your working copy. It does this to
> > make sure that no other Subversion clients can update the working copy
> > at the same time (to avoid corruption).
> >
> > The current working copy library does this by writing a 'lock' file
> > (literally) in the .svn directory of each and every directory in your
> > working copy.
> >
> > Only after it completes this first step does it start the actual
> > updating: calculating what should be updated.
> >
> > In the gigantic working copy I used for testing, this writing of lock
> > files took about 55 seconds. The actual update was then just a few
> > seconds, and removing those 8234 files was almost instant (not really
> > measurable).
> svn update is very slow with our (very large!) working copies since it
> needs to figure out what's changed before performing the update. This is
> where the "shall we implement an svn edit command" discussion emanated.
> Perforce keeps all changes on the server, thus negating this extra sweep
> through the .svn directories in the filesystem.

Have you profiled this to show that the actual calculation is the cause?

I assumed it was this calculation while profiling, but just replacing the
locking with a dummy in-memory lock (absolutely not production-ready)
brought that 80-90% performance increase.

Creating a file is not a cheap operation, especially on network drives,
where the client needs to check whether it can create the file, create it,
and then check whether that succeeded. (The 150 files/second I measured on
Windows could be pretty fast compared to your situation.)
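To get a feel for the file-creation cost on your own disk or network share,
here is a minimal, hypothetical micro-benchmark (not from this thread; the
filenames and count are arbitrary):

```python
import os
import tempfile
import time

def file_creation_rate(n=500):
    """Create n empty files in a fresh temp directory and return files/second.

    This mimics what the working copy library does with its per-directory
    'lock' files: open a file, write nothing, close it.
    """
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n):
            with open(os.path.join(d, "lock-%d" % i), "w"):
                pass
        elapsed = time.perf_counter() - start
    return n / elapsed

if __name__ == "__main__":
    print("%.0f files/second" % file_creation_rate())
```

Run it once against a local disk and once against the share holding your
working copy; the ratio gives a rough idea of how much of the lock-writing
phase is pure file-creation overhead.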

The easiest way to check for yourself is:
$ svn status --show-updates

This does all the work of 'svn update' (calculating what to update) and 'svn
status' (what is dirty locally), but without getting the write lock needed
for the actual updating.

On OS X, this svn status --show-updates is about as fast as svn update on a
big working copy, while on Windows it is much slower.

(My guess is that it is slow over NFS too.)
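You can also estimate how many lock files an update would have to create by
counting the .svn administrative directories in your working copy. A small
sketch (the fake layout built below is only for illustration; pointing the
function at a real checkout works the same way):

```python
import os
import tempfile

def count_lock_files_needed(wc_root):
    """Count the .svn administrative directories under wc_root.

    Each versioned directory in this working copy layout holds a .svn
    subdirectory, and the client writes one 'lock' file into each of them
    before updating, so this count predicts the number of lock files.
    """
    return sum(1 for _, dirs, _ in os.walk(wc_root) if ".svn" in dirs)

if __name__ == "__main__":
    # Build a tiny fake working copy just for the demonstration.
    with tempfile.TemporaryDirectory() as root:
        for sub in ("trunk", "trunk/src", "trunk/doc"):
            os.makedirs(os.path.join(root, sub, ".svn"))
        print(count_lock_files_needed(root))  # -> 3
```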

(Repeating from your mail)
> svn update is very slow with our (very large!) working copies since it
> needs to figure out what's changed before performing the update. this is where

Actually, it doesn't have to do this (and doesn't).

It only has to check whether there are switched locations in your working
copy. Whether something has changed (or not) is only interesting when
changes are merged into your working copy.



Received on 2009-02-12 23:29:39 CET

This is an archived mail posted to the Subversion Users mailing list.
