
Re: Svn scaling issue

From: <kfogel_at_collab.net>
Date: 2005-01-14 18:37:56 CET

Could you give an exact transcript of the operations you tried, and
how long they took, and anything else you noticed? I just want to
make sure this really is the same problem as issue #1452.

Thanks,
-Karl

"Arlo Belshee" <arlo@criticalpath.com> writes:
> I'm trying to use Subversion with a client's legacy codebase. Performance is
> unacceptable, and it all has to do with client-side locking. This is, I
> think, related to Issue 1452.
>
> This codebase has over 25,000 directories.
>
> Even a really good dev box with a fast hard drive takes 10-15 minutes to
> perform the lock step before the operation actually does anything. Of
> course, it then takes a similar amount of time to unlock everything when it
> is done - even if the action is cancelled.
>
> Although the operation itself may scale only with the amount of change, the
> locking scales with the size of the entire checked-out tree. Each time we
> branch the code, the problem gets worse (that is, if we ever check out the
> branch).
>
> Some operations can be done in subdirectories, and those go quickly.
> However, this codebase is poorly organized, so most changes are pervasive.
> Thus, updates always have to be run from the top of the tree, and they take
> 30 minutes just to tell you that the current working copy is up to date.
>
> It seems to me, as one unfamiliar with the gory details, that a simple,
> constant-time locking operation is possible. At the beginning of an
> operation, it would lock only the top of the tree being operated on, and
> would not record any lock in any subdirectory. When checking for locks,
> again only at the beginning of an operation, the system would simply walk
> up the tree, checking for a lock at each directory it encounters.
>
> The problem I see is this: I start an operation on a subdir, then start an
> operation on one of its parents before the subdir operation completes.
> This, however, could be fixed by a slight modification: define two types of
> locks. The first indicates that the tree from there down is locked. The
> second is a warning - it indicates that there is a locked subtree somewhere
> in the current tree. The locking algorithm, then, would be something like
> the following (a rough code sketch appears after the footnote below):
>
> 1. Check the current directory for a lock or a warning - a warning here
> means some subtree below is already locked. Then search from the current
> directory up to the root, checking each directory for a lock. If you find
> any of these, the system is already locked by another operation (exit
> algorithm).
> 2. If you don't encounter a lock, set a warning in every directory
> traversed (from current up to root), and set a lock in the current
> directory.
> 3. Perform the operation.
> 4. Remove the lock. Remove all warnings. *
>
> * In order for this to work, you'd need to allow multiple warnings on a
> directory. That way, if you are running two operations in subdirs of a
> directory A, they can both independently increment the "warning count" for
> dir A. When each finishes, it decrements the count, so the warning remains
> active until all of them have completed.
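>
> To make that concrete, here is a rough sketch of the scheme in Python.
> This is purely illustrative - the file names (.op-lock, .op-warn) and the
> function names are mine, not anything Subversion actually uses, and a real
> implementation would need atomic check-and-set rather than these racy file
> tests:
>
>     import os
>
>     LOCK = ".op-lock"   # full lock: this subtree is in use
>     WARN = ".op-warn"   # counter: a subtree below here is locked
>
>     def read_count(d):
>         # Current warning count for directory d (0 if none).
>         try:
>             with open(os.path.join(d, WARN)) as f:
>                 return int(f.read())
>         except FileNotFoundError:
>             return 0
>
>     def write_count(d, n):
>         # Store the warning count, removing the file at zero.
>         path = os.path.join(d, WARN)
>         if n <= 0:
>             if os.path.exists(path):
>                 os.remove(path)
>         else:
>             with open(path, "w") as f:
>                 f.write(str(n))
>
>     def ancestors(d, root):
>         # Yield d's parents up to and including root.
>         # Assumes d is a normalized path inside root.
>         while d != root:
>             d = os.path.dirname(d)
>             yield d
>
>     def acquire(d, root):
>         # Step 1: a lock on d, a warning on d (locked subtree
>         # below), or a lock on any ancestor means the tree is
>         # already busy.
>         if os.path.exists(os.path.join(d, LOCK)) or read_count(d) > 0:
>             raise RuntimeError("subtree already locked")
>         for a in ancestors(d, root):
>             if os.path.exists(os.path.join(a, LOCK)):
>                 raise RuntimeError("ancestor already locked")
>         # Step 2: bump each ancestor's warning count, lock d.
>         for a in ancestors(d, root):
>             write_count(a, read_count(a) + 1)
>         open(os.path.join(d, LOCK), "w").close()
>
>     def release(d, root):
>         # Step 4: drop the lock, decrement each ancestor's count.
>         os.remove(os.path.join(d, LOCK))
>         for a in ancestors(d, root):
>             write_count(a, read_count(a) - 1)
>
> The point is that acquire and release touch only the directories on the
> path from the operation's root up to the top of the working copy - a
> number proportional to the tree's depth, not to the 25,000 directories
> in it.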
>
> I am, of course, not going to write this myself. However, I'd really
> appreciate it if some solution to the long locking problem were added to
> Subversion. As it is, using Subversion on our LAN is only marginally faster
> than using Perforce across the internet.
>
> Thanks in advance for any improvements you can make to the lock scaling
> problem.
>
> Arlo

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Fri Jan 14 18:59:38 2005

This is an archived mail posted to the Subversion Users mailing list.
