This brings up a question for me. I have a couple of repos that are over 5 years old and approaching 400GB of storage. I'd like to "trim" off the first couple of years of revisions, store them in some sort of "archive" repo, and keep the most recent revisions in an "active" repo. I've been toying with dump/export commands but haven't had any success. I would like to keep us well away from any possible limits.
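For example (the cutoff revision 150000 below is just a placeholder;
I haven't settled on a real number), I've been trying variations of:

    # archive repo: everything up to the cutoff revision
    svnadmin create /repos/project-archive
    svnadmin dump /repos/project -r 0:150000 > archive.dump
    svnadmin load /repos/project-archive < archive.dump

    # active repo: everything after the cutoff; the first revision
    # in the range is dumped in full, so it loads into an empty
    # repo, but the revisions get renumbered starting at 1
    svnadmin create /repos/project-active
    svnadmin dump /repos/project -r 150001:HEAD > active.dump
    svnadmin load /repos/project-active < active.dump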
Cheers,
Thomas
From: kmradke_at_rockwellcollins.com [mailto:kmradke_at_rockwellcollins.com]
Sent: Friday, September 28, 2012 10:52 AM
To: CHAZAL Julien
Cc: users_at_subversion.apache.org
Subject: Re: Subversion limits?
> I manage a Subversion server that has the following configuration:
> - SVN 1.6.9
> - FSFS storage mode
> - Apache + mod_dav + subversion modules
> - SUSE Linux Enterprise, 32-bit
>
> On this SVN server, there are around 1100 SVN repositories for
> around 2000 users. I have small repositories and also very large
> ones (the largest is around 33 GB on my Linux filesystem).
> My repositories total around 1TB.
>
> Do you know if there is a size limit for an SVN repository in Subversion?
> Do you know if there is a limit on the number of SVN repositories
> on a Subversion server? Does it really decrease performance on
> the Subversion server?
This really depends upon the hardware and how the users are using
the server. That said, the largest server I have has 1800
repositories serving around 6500 users. The largest repository
is around 400GB with around 7TB of total storage. The largest
single commit I have seen is around 53GB.
The larger a repository gets, the longer it takes to do
maintenance activities such as verifying, filtering, dumping,
and loading it. This is why I'd recommend staying away from
large repositories and large commits, but they do work.
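For reference, the maintenance cycle I'm talking about is
basically this (paths made up):

    svnadmin verify /srv/svn/bigrepo
    svnadmin dump -q /srv/svn/bigrepo > bigrepo.dump
    svnadmin create /srv/svn/bigrepo-new
    svnadmin load -q /srv/svn/bigrepo-new < bigrepo.dump

Each of those steps is a full pass over the repository, so at
400GB they can each run for many hours.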
Subversion seems to be I/O bound, even on a high-end SAN. 1.7
definitely seems to chew more CPU and memory, though. But I've
also seen multiple 1Gb NICs near saturation on the server...
Things that can kill performance:
- Slow filesystem I/O
- Poorly written hook scripts (see the sketch after this list)
- Commits with large numbers of files (1M+)
- Lots of files locked (hundreds of thousands+)
- Slow authentication servers
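On the hook point: a pre-commit hook runs on every commit and
blocks it until the hook exits, so anything slow in there is felt
by every committer. A cheap check like this sketch is fine
($REPOS and $TXN are the two arguments Subversion passes to
pre-commit; the 100000-path threshold is made up):

    #!/bin/sh
    REPOS="$1"
    TXN="$2"
    # Cheap: only inspects the in-flight transaction.
    CHANGED=$(svnlook changed -t "$TXN" "$REPOS" | wc -l)
    if [ "$CHANGED" -gt 100000 ]; then
        echo "Commit touches $CHANGED paths; please split it up." >&2
        exit 1
    fi
    exit 0

But add a network call or a scan of the whole repository in there
and every commit pays for it.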
You could easily run into issues depending upon the filesystem
type and how you have organized the repositories. For example,
one large partition holding everything *might* be less efficient
than spreading the repositories across several smaller ones.
Kevin R.