On 7/8/06, Nico Kadel-Garcia <email@example.com> wrote:
> gmu 2k6 wrote:
> > I'm experimenting with different filesystems (linux fs, not svn
> > backend) to see which might be the best option for one or more SVN 1.3
> > repos I'm going to host.
> > the data to be committed to SVN consists of the following top-level
> > directories which are spread around 2 or 3 repos in CVS right now:
> > src 1GiB
> > res 3.3 GiB
> > projX 780MiB
> > projXsetup 230MiB
> > all sizes are the space used on a local disk after checking it out
> > from CVS of course and not the size on the CVS server.
> > we will commit all existing data from a fresh CVS-checkout to SVN and
> > use a read-only CVS server for history. this is done so that we start
> > anew and get rid of the accumulated history no one actually needs and
> > if one needs it she can use the read-only CVS server.
> > ---- server-info
> > storage: HP SmartArray 6400 RAID 1+0 with four primary partitions for
> > trying out four different linux file system configurations in parallel
> > cpu: one or two Xeon 3GHz
> > ram: 4GiB
> > distro: Debian Testing
> > linux: >= 2.6.15
> > the partitions I have created so far are:
> > p1: ext3 dir_index, sparse_super
> > p2: ext3 dir_index, sparse_super, largefile
> > p3: ext3 dir_index, sparse_super, largefile4
> > p4: <empty>
> > AFAIK the FSFS backend will create one file per changeset so ext3's
> > dir_index might be of help but I'm not sure how to tackle the problem
> > that changesets are normally really small but can be quite big with
> > binary files. choosing the best block size with ext3 for this pattern
> > is hard. maybe Reiser3 or XFS might be a better fit; any opinions with
> > good reasoning would be welcome.
> Don't hurt yourself trying to over-tune the system. Seriously, the default
> ext3 settings with dir_index are plenty for directories or database files
> that may accumulate many thousands of entries in a single directory,
> and I've found ext3 to be more reliable when hardware begins to fail than
Well, I might still try XFS, as it is said to cope better with hardware
failure, though the disk array should take care of that in the first
place :). It might not hurt to compare ext3 and XFS.
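Since FSFS writes one file per revision, the workload to compare is really
"many small files in one directory". Here is a minimal sketch of how one
might time that on each candidate partition; the mount points (/mnt/p1 etc.)
are assumptions, so adjust them to wherever the test filesystems are mounted:

```python
#!/usr/bin/env python
# Sketch: time creation of many small files, which approximates the
# FSFS one-file-per-revision pattern. Mount paths are assumptions.
import os
import shutil
import tempfile
import time

def time_small_files(base, count=5000, size=512):
    """Create `count` files of `size` bytes in one directory on `base`;
    return elapsed seconds. The directory is removed afterwards."""
    d = tempfile.mkdtemp(dir=base)
    payload = b"x" * size
    start = time.time()
    for i in range(count):
        with open(os.path.join(d, "rev-%d" % i), "wb") as f:
            f.write(payload)
    elapsed = time.time() - start
    shutil.rmtree(d)
    return elapsed

if __name__ == "__main__":
    # Assumed mount points for the four test partitions.
    for mount in ("/mnt/p1", "/mnt/p2", "/mnt/p3", "/mnt/p4"):
        if os.path.isdir(mount):
            print("%s: %.2fs" % (mount, time_small_files(mount)))
```

It only measures raw file creation, not svnserve itself, but it should be
enough to show whether dir_index or XFS makes a visible difference for this
access pattern.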
> > the second big question I have is whether there is a performance
> > problem with stuffing all of the dirs as outlined above into one repo
> > or using separate repos. when using svnserve without ssh and many
> > repos this would of course mean that I have to maintain multiple
> > access-control configs and sync the password files. therefore for
> > creating multiple repos to be used by the same groups of devs it may
> > be best to use svn+ssh and rely on xattr or go with WebDAV although I
> > really want to avoid Apache for security and performance reasons.
> Hah. I've dealt with this. Welcome to the world of Apache password files,
> which can be entirely shared for a master directory, and the additional
> layer of user access available through svnperms.py and svnperms.conf, which
> can be symlinked into every repository's configuration.
Now that I've re-read the 1.3 release notes: can't I just do this with
a shared authz-db file?
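As far as I can tell from the release notes, yes: each repository's
svnserve.conf can point password-db and authz-db at the same shared files,
and the authz file scopes rules per repository. A minimal sketch (the paths
and names here are made up for illustration):

```ini
; conf/svnserve.conf in each repository
[general]
password-db = /srv/svn/shared/passwd
authz-db = /srv/svn/shared/authz

; /srv/svn/shared/authz -- one rule file covering all repos
[groups]
devs = alice, bob

[src:/]
@devs = rw

[res:/]
@devs = rw
```

That would avoid syncing multiple access-control configs entirely, at least
for svnserve.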
> > the third question mark in my head is: what setup do people like
> > apache.org, kde.org and other big projects with many binary files and
> > text files use?
> Sourceforge: lots of projects, many of which are huge but most of which are
> functionally distinct from each other and have their own codebases.
Yeah, but they all use separate repos, as Apache most probably does too.
Received on Sat Jul 8 15:37:20 2006