> Date: Wed, 25 May 2011 12:17:07 +0200
> From: stsp_at_elego.de
> To: d.s_at_daniel.shahaf.name
> CC: smith_winston_6079_at_hotmail.com; dev_at_subversion.apache.org
> Subject: Re: large number of large binary files in subversion
> On Wed, May 25, 2011 at 01:02:28PM +0300, Daniel Shahaf wrote:
> > Stefan Sperling wrote on Wed, May 25, 2011 at 11:45:21 +0200:
> > > On Wed, May 25, 2011 at 10:03:16AM +1100, Winston Smith wrote:
> > > > Yes, I planned to do that for a read-only backup repository as part of
> > > > various backup schedules (daily, weekly, monthly, yearly).
> > >
> > > Unfortunately there is no incremental hotcopy support yet,
> > > see http://subversion.tigris.org/issues/show_bug.cgi?id=3815
That won't be necessary, by design. A hotcopy is always meant to be
used as a full backup. SVN uses delta storage, so it effectively *is*
doing an incremental backup already. All one has to do is consolidate
these incremental pieces into one full backup, which is something one
would have to do on recovery anyway if SVN didn't do it.
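To make the "incremental pieces, consolidate later" idea concrete, here is a hedged sketch using `svnadmin dump --incremental`, which dumps only the revisions added since the last run. The paths, the `last-dumped` marker file, and the use of a throwaway demo repository are my own illustration, not something from this thread; a real setup would point at the actual repository and backup directory.

```shell
#!/bin/sh
# Sketch: dump only revisions newer than the last backed-up revision.
# All paths here are hypothetical; a fresh demo repo stands in for a
# real one so the script can run end to end.
set -e
command -v svnadmin >/dev/null || { echo "svnadmin not installed"; exit 0; }

REPO=$(mktemp -d)/repo       # stand-in for the real repository path
BACKUPS=$(mktemp -d)         # stand-in for the backup directory
svnadmin create "$REPO"      # demo repository (contains revision 0 only)

HEAD=$(svnlook youngest "$REPO")
LAST=$(cat "$BACKUPS/last-dumped" 2>/dev/null || echo -1)

if [ "$HEAD" -gt "$LAST" ]; then
    # --incremental writes each revision as a delta against its
    # predecessor; restoring means 'svnadmin load'-ing the dump
    # files in order, which is the consolidation step.
    svnadmin dump --incremental -r "$((LAST + 1)):$HEAD" "$REPO" \
        > "$BACKUPS/incr-$((LAST + 1))-$HEAD.dump"
    echo "$HEAD" > "$BACKUPS/last-dumped"
fi
```

Each run produces a small dump file covering only new revisions, while `svnadmin hotcopy` remains the tool for the periodic full copy.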
> > > If you ensure that no commits happen during the backup period you
> > > could use rsync instead.
> > It is not safe to rsync live Subversion filesystems. (the result may or
> > may not be corrupt)
> That's why I said that no commits should happen. But thanks for
> spelling it out more explicitly.
I recently heard of someone who rsynced live PostgreSQL databases as
part of his backup schedule and then wondered why the result was
corrupted... I believe an SVN repo can be put into read-only mode
quickly by symlinking the pre-commit hook to /bin/false (creating a
symlink is an atomic operation, so there are no race conditions) and
then waiting a bit to let all in-flight commits finish. Read operations
are guaranteed not to alter the repo files. Rsyncing should then be OK,
but I still prefer hotcopy since it is the canonical way; I like
canonical and dedicated ways to do things. Rsync seems to be a silver
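The freeze trick above can be sketched as a small script. The repository path is a throwaway temp directory here purely for illustration, and only the hook mechanics are shown; the actual rsync/hotcopy step is left as a comment.

```shell
#!/bin/sh
# Sketch of freezing commits before a backup. The repo layout is
# faked with a temp directory; a real run would use the repository root.
set -e
REPO=$(mktemp -d)            # stand-in for the real repository root
mkdir -p "$REPO/hooks"

# symlink(2) creates the link in a single atomic syscall, so no commit
# can ever observe a half-installed hook. To *replace* an existing
# hook atomically, link under a temporary name and rename(2) it into
# place instead.
ln -s /bin/false "$REPO/hooks/pre-commit"

# Every commit attempt now fails in the pre-commit hook:
if "$REPO/hooks/pre-commit"; then
    echo "commit would have been allowed"   # not reached
else
    echo "commits are refused"
fi

# After waiting for in-flight commits to drain, rsync or
# 'svnadmin hotcopy' can run safely. Re-enable commits afterwards:
rm "$REPO/hooks/pre-commit"
rmdir "$REPO/hooks" "$REPO"
```

Note that a pre-commit hook only blocks new commits; it does not abort transactions that were already past the hook when the link was made, which is why the script waits for in-flight commits to finish.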
Received on 2011-05-26 01:06:08 CEST