
Re: How Big A Dump File Can Be Handled?

From: Nico Kadel-Garcia <nkadel_at_gmail.com>
Date: Wed, 21 Aug 2013 18:09:51 -0400

I would never do a transfer like this without a copy of the dumpfile
available for reference. The pain of having to re-run the dump later,
especially if there are any bugs in the "svnadmin load" configuration,
normally justifies keeping the dump around until well after the migration
is completed.
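
One way to have it both ways (a sketch, assuming a Unix shell; the
repository paths and dump filename are illustrative) is to splice tee into
the pipeline Ben suggests below, so the stream is saved to disk while it
is being loaded:

svnadmin dump --incremental oldrepo | tee oldrepo.dump | svnadmin load newrepo

tee copies the dump stream into oldrepo.dump as it passes through to
"svnadmin load", so the migration still runs in one pass but the dumpfile
is on disk if the load ever has to be re-run.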

On Tue, Aug 20, 2013 at 10:11 PM, Ben Reser <ben_at_reser.org> wrote:

> On Tue Aug 20 16:44:08 2013, Geoff Field wrote:
> > I've seen some quite large dump files already - one got up to about
> > 28GB. The svnadmin 1.2.3 tool managed to cope with that quite
> > successfully. Right now, our largest repository (some 19,000 revisions
> > with many files, including installation packages) is dumping. In the
> > 5300 range of revisions, the dump file has just passed 9GB.
>
> Shouldn't be a problem within the limits of the OS and filesystem.
> However, I'd ask why you're bothering to produce dump files at all. Why
> not simply pipe the output of your dump command to a load command, e.g.
>
> svnadmin create newrepo
> svnadmin dump --incremental oldrepo | svnadmin load newrepo
>
> You'll need space for two repos, but that should be less than the space
> the dump file will take. I included the --incremental option above
> because there's no reason to describe the full tree for every revision
> when you're doing a dump/load cycle. You can save space with --deltas
> if you really want the dump files, but at the cost of extra CPU time.
> If you're just piping to load, the CPU time to calculate the deltas
> isn't worth it since you're not saving the dump file.
>
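
To illustrate the --deltas point above: if you do want a dump file on
disk, a deltified dump (a sketch; the filename is illustrative) would
look like

svnadmin dump --incremental --deltas oldrepo > oldrepo.dump
svnadmin load newrepo < oldrepo.dump

With --deltas each revision records file changes as diffs rather than
full texts, so the dump file can shrink considerably, at the cost of the
extra CPU time Ben mentions on both the dump and any later load.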
Received on 2013-08-22 00:10:24 CEST

This is an archived mail posted to the Subversion Users mailing list.
