
Re: Poor performance for large software repositories downloading to CIFS shares

From: Yves Martin <ymartin59_at_free.fr>
Date: Wed, 14 Jul 2010 18:24:37 +0200

On Tue, 2010-07-13 at 20:40 -0400, Nico Kadel-Garcia wrote:

> Well, yes, except that updating an "export" can't be done since it
> will lack the rest of the .svn information. The point is that they can
> download an up-to-date working copy directly, rather than over the
> poor performance of the CIFS share.

So why are your users unable to access the Subversion repository
directly, over either the http(s) or svn protocol?
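For workstation users, a direct checkout over the network protocols is straightforward; a minimal sketch (the repository URL below is a hypothetical placeholder):

```shell
# Check out directly from the server over http(s) or svn protocol,
# instead of reading a working copy through a slow CIFS share.
svn checkout https://svn.example.com/repos/project/trunk project

# Later, refresh the same working copy in place rather than
# re-downloading everything:
svn update project
```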

> > I have seen a 1 GB working copy properly checked out on a local disk.
> > Once the working copy is there, just use "update" and "switch" to limit
> > transfers and disk writes... Why do a new checkout each time?
>
> And that actually works. There are problems with this approach: this
> local disk is inaccessible from other working systems without serious
> crossmounting craziness, is not workable for high availability
> services, and causes any local modifications that haven't been checked
> in to be lost when switching to another system.

Am I right to guess that you are trying to prevent the loss of a day's
work with such a complex system? I think it is cheaper and more
comfortable to set up RAID-1 disks on the workstations...

If you want your users to commit to the repository regularly (twice a
day, for instance, even when the code does not compile), one option is to
have them commit their work to individual branches, which are merged back
when the job is done.
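A possible sketch of that per-developer branch workflow (the URLs and
branch name below are hypothetical placeholders, not from the thread):

```shell
# Create a private branch for one developer; "svn copy" is a cheap,
# server-side operation in Subversion:
svn copy https://svn.example.com/repos/project/trunk \
         https://svn.example.com/repos/project/branches/alice-work \
         -m "Create private work branch"

# Point the existing working copy at the private branch:
svn switch https://svn.example.com/repos/project/branches/alice-work

# Commit freely, even when the code does not compile:
svn commit -m "Work in progress: end of day checkpoint"

# When the job is done, merge the branch back into trunk:
svn switch https://svn.example.com/repos/project/trunk
svn merge https://svn.example.com/repos/project/branches/alice-work
svn commit -m "Merge private work branch into trunk"
```

Because the branch lives on the server, uncommitted local changes are the
only thing at risk when a workstation fails or the user moves to another
machine.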
Received on 2010-07-14 18:28:21 CEST

This is an archived mail posted to the Subversion Users mailing list.
