
Re: Can Subversion work well with very large data sets?

From: Nico Kadel-Garcia <nkadel_at_gmail.com>
Date: Sun, 10 Oct 2010 16:04:14 -0400

On Sat, Oct 9, 2010 at 5:58 PM, Stefan Sperling <stsp_at_elego.de> wrote:

> It sounds like your main problem is local operations in the working
> copy (i.e. disk i/o).
> Could you do another evaluation of Subversion when we start issuing
> beta releases of Subversion 1.7 probably at the end of this year?
> We're trying to address scalability problems in the 1.6 working copy
> implementation in 1.7 -- the entire working copy handling code has been
> rewritten for this and other purposes. Further performance improvements
> are planned for 1.8 and later (some have already been implemented on a
> branch that's going to be completely reviewed and merged after 1.7 release).
>
> For 1.6, you should really consider running the client on Linux.
> Subversion has been reported to be up to 10 times faster on Linux
> than on Windows.

I'm someone who reported such issues. The issue wasn't so much Linux
versus Windows as it was CIFS: checking out working copies onto a
shared directory over CIFS, versus using local disk or NFS. I've since
noted that a CIFS share mounted on Linux also seriously bites.

> You should also try Mercurial if you haven't already. It should match
> your criteria no worse than Subversion does.
>
> Thanks,
> Stefan

Or git. The git-svn tools are handy for committing local working copy
changes and forcing submissions to an upstream, master repository only
when ready.
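As a sketch of that workflow (the repository URL and file names here are
hypothetical examples, not from the thread):

```shell
# Mirror an existing Subversion repository into a local git repository.
# --stdlayout assumes the conventional trunk/branches/tags structure.
git svn clone --stdlayout http://svn.example.com/repos/project project
cd project

# Work and commit locally as often as you like; nothing touches the server.
git add src/foo.c
git commit -m "Fix buffer handling in foo"

# When ready, replay the local commits against the upstream SVN repository.
git svn rebase    # pull in any new upstream revisions first
git svn dcommit   # push each local commit upstream as an SVN revision
```

Because all the intermediate commits stay local, only `git svn dcommit`
hits the master repository, which is the point Nico is making.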

Every significantly sized source control system has web GUIs, such as
ViewVC for Subversion and CVS, and gitweb for git.
Received on 2010-10-10 22:04:55 CEST

This is an archived mail posted to the Subversion Users mailing list.
