This is from the svnbook; the text below describes my experience very well.
A single-revision dump took ~1 day and produced an ~80 GB gzip -9 dump
from a 7 GB repository.
P.S. The --deltas option does not help here, because I dumped only one
single revision -> no history.
The big problem is creating tags with svn copy.
Just to be clear: a single file that appears in different repository trees
created by svn copy will be dumped for every tree as full text.
By my count: a 5 GB head tree x ~200 tags -> 1 TB dump ... for one
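As a quick back-of-the-envelope check of that estimate, here is a shell sketch; the 5 GB head size and ~200 tags are the figures from this post, not measured values:

```shell
# Rough size estimate: in a single-revision dump (no history, so no deltas),
# every tag made with "svn copy" is written out as full text.
head_gb=5    # approximate size of one full-text copy of the head tree
tags=200     # approximate number of tags
echo "$((head_gb * tags)) GB"   # ~1000 GB, i.e. roughly 1 TB
```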
$ svnadmin create newrepos
$ svnadmin dump oldrepos | svnadmin load newrepos
By default, the dump file will be quite large—much larger than the
repository itself. That's because by default every version of every file is
expressed as a full text in the dump file. This is the fastest and simplest
behavior, and it's nice if you're piping the dump data directly into some
other process (such as a compression program, filtering program, or loading
process). But if you're creating a dump file for longer-term storage, you'll
likely want to save disk space by using the --deltas option. With this
option, successive revisions of files will be output as compressed, binary
differences—just as file revisions are stored in a repository. This option
is slower, but it results in a dump file much closer in size to the original
repository.
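For reference, a dry-run sketch of the two dump styles described above; the repository path is a placeholder, and the commands are only printed, not executed:

```shell
# Placeholder repository path; this sketch only prints the commands.
REPOS=/var/svn/oldrepos

# Default: every file revision is written as full text (fast, but large).
echo "svnadmin dump $REPOS > full.dump"

# --deltas: successive revisions as binary diffs (slower, much smaller).
echo "svnadmin dump --deltas $REPOS > deltas.dump"
```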
We mentioned previously that svnadmin dump outputs a range of revisions.
Use the --revision (-r) option to specify a single revision, or a range of
revisions, to dump. If you omit this option, all the existing repository
revisions will be dumped.
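A dry-run sketch of the --revision (-r) option; the path and the revision numbers are placeholders, and the commands are only printed:

```shell
# Placeholder repository path; this sketch only prints the commands.
REPOS=/var/svn/oldrepos

echo "svnadmin dump -r 23 $REPOS > rev-23.dump"       # a single revision
echo "svnadmin dump -r 100:200 $REPOS > range.dump"   # a range of revisions
```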
ISV13 - System Responsibility, Life (Systemverantwortung Leben)
VersIT Versicherungs-Informatik GmbH
Gottlieb-Daimler-Str. 2, 68165 Mannheim
Register court: Mannheim HRB 6287
Chairman of the Management Board: Claus-Peter Gutt
Marc Haisenko <haisenko_at_comdasys.com>
Re: Antwort: Re: Problem using subversion on a large repository
On Friday 24 October 2008 13:37:33 Andreas.Otto_at_versit.de wrote:
> Thanks for the answer ... now my results:
> -> Yeah, you are right. Repository size right now is:
> 7270760 K -> 7 GB
> I tried hard to solve this problem -> now I present my experience:
> 1. This doesn't work:
> -> svnadmin dump -r $(svnlook youngest $OLD) $OLD |
> svndumpfilter exclude mytags | svnadmin load $NEW
> The reason: removing the huge "mytags" tree (created
> with "svn copy ...") with svndumpfilter !after! "svnadmin dump"
> was unusably slow, because before you can filter anything out
> you have to dump everything, ~1 TB (just an assumption).
Your assumption is wrong; the dump is a special representation of your
repository database. You seem to assume, for example, that if you have one
directory with 1 GB and made 10 branches, your dump will now be 10 GB in
size. This is not the case, as a branch is only a few bytes in size.
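To illustrate the point about cheap copies: in a dump stream that covers the revision where the copy was made, a tag created with "svn copy" appears as a small copy record rather than as full text, roughly like this (the path and revision numbers are made up):

```
Node-path: tags/release-1.0
Node-kind: dir
Node-action: add
Node-copyfrom-rev: 1234
Node-copyfrom-path: trunk
```

The full-text blow-up described earlier in the thread happens when the dump range does not contain the copy source (for example, a single-revision dump of HEAD): the first dumped revision is then written out as a complete tree, and every tag comes out as full text.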
> 2. The solution was to delete the whole tags directory first
> -> svn delete ....
> and then use the command above to create the new
> repository, but I have to say:
> -> Creating tags with "svn copy ..." is a design error:
> nice for small projects, but not usable in a big
> environment !!!
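The workaround in point 2 above can be sketched as a dry run; the paths are placeholders, and the commands are only printed, not executed:

```shell
# Placeholder repository paths; this sketch only prints the commands.
OLD=/var/svn/oldrepos
NEW=/var/svn/newrepos

# 1. Delete the huge tags tree in the old repository first...
echo "svn delete -m 'drop tags before migration' file://$OLD/mytags"
# 2. ...then dump only HEAD, filter out the remains, and load the result.
echo 'svnadmin dump -r $(svnlook youngest $OLD) $OLD | svndumpfilter exclude mytags | svnadmin load $NEW'
```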
Please describe in more detail why you think so.
Projects like GCC, KDE and Apache would certainly say that it DOES work
for huge projects, and I am willing to bet money on the fact that your
repository does not even REMOTELY reach their size (e.g. as I write this,
KDE's repository is at revision 876044 and I know it's bigger than 34 GB
(that was their size last December)). With a trunk check-out size of 2 GB
(AFAIK) you can see this is very efficient, given the fact that they also
have a huge number of branches and tags. Have a look yourself:
http://websvn.kde.org
> 3. I just support the "old-style" position that
> -> svnadmin dump OLD | svnadmin load NEW
> should create a repository NEW of close to the same size
If you upgrade from an older Subversion to a newer version, it is even
possible that your new repository is SMALLER than the old one, due to
improvements in the repository storage format. It will be bigger with 1.5,
however, due to improved merge tracking.
Rüdesheimer Str. 7
Tel.: +49 (0)89 548 433 321
Received on 2008-10-27 11:15:23 CET