
RE: svnadmin obliterate features (was: RE: the obliterate discussion - why dumpfiltering is no valid workaround)

From: Erik Hemdal <erik_at_comprehensivepower.com>
Date: 2007-07-24 18:44:12 CEST

> -----Original Message-----
> From: patrick [mailto:Patrick.Wyss@mobilesolutions.ch]
> Sent: Tuesday, July 24, 2007 11:00 AM
> To: users@subversion.tigris.org
> Subject: RE: svnadmin obliterate features (was: RE: the
> obliterate discussion - why dumpfiltering is no valid workaround)
> Erik Hemdal wrote:
> >
> > It would seem then that following an obliterate operation, one
> > would have to invalidate any existing incremental backups, and
> > maybe all your existing backups too. After all, they depend on a
> > state of the repository that no longer exists. If an admin uses a
> > complex backup strategy, he then has to start from scratch with a
> > complete backup and restart the backup scheme.
> >
> > If we didn't do that, then we'd have a situation where, in order to
> > restore, one would have to apply some incremental backups up to the
> > point of the obliterate operation, then repeat the obliterate, and
> > then continue with restoring backups (because the structure of the
> > repository is now different). Thinking about "obliteration
> > tracking" makes merge tracking seem trivial.
> >
> I'm still not sure if I understand you correctly...
> Are you talking about FSFS and backups using a "traditional"
> file backup tool? If so, I think there are no problems with
> that: the (rev-)files we change are changed, and will be
> backed up in the next incremental/differential backup.

I'm thinking about 'svnadmin dump'. You can perform an incremental dump of
only a range of revisions, which gives you a smaller, faster dump --
provided that you have the repository in the right state to accept an
'svnadmin load' from your incremental dump.
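As a sketch of that scheme (the repository paths here are hypothetical), a full dump plus an incremental dump, and the matching restore, might look like this:

```shell
# Full dump of everything up to r10 (hypothetical paths).
svnadmin dump /var/svn/repos -r 0:10 > full-r0-10.dump

# Incremental dump of r11-r15: each revision is stored only as a
# delta against its predecessor, so the file stays small and fast.
svnadmin dump /var/svn/repos -r 11:15 --incremental > incr-r11-15.dump

# To restore, the loads must be applied in order into a fresh repository.
svnadmin create /var/svn/restored
svnadmin load /var/svn/restored < full-r0-10.dump
svnadmin load /var/svn/restored < incr-r11-15.dump
```

The load of the incremental file only succeeds if the target repository is at exactly the state the dump's first delta expects, which is the crux of the problem.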

Let's say I have a complete dump at R10, and incrementals for R11-R15. But
I then use the proposed 'svnadmin obliterate' command and remove some files
completely, making some of the revisions different. Finally I take more
incrementals for R16-R20, because that's how my backup scheme works.

If at R20, I have a crash, I can recover through R15. But then what? The
repository is consistent through R15, but does not reflect the obliterate
operation. The incremental dump at R16 assumes that the obliteration has
already happened. It's not clear to me that the recovery can finish without
an external record of the obliteration, so I can repeat it. I think a fresh
complete dump would be advisable, if not mandatory.
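To make the failure concrete, here is the restore sequence that scenario implies (the `svnadmin obliterate` step is the proposed, not-yet-existing command, and the paths and dump filenames are hypothetical):

```shell
# Restore the pre-obliterate history.
svnadmin create /var/svn/restored
svnadmin load /var/svn/restored < full-r0-10.dump    # r0-r10
svnadmin load /var/svn/restored < incr-r11-15.dump   # r11-r15

# At this point the restored repository does NOT reflect the obliterate
# that happened after r15.  The proposed command would have to be
# repeated here with exactly the same arguments as the original run:
#   svnadmin obliterate /var/svn/restored /some/bigFile.mdb   # hypothetical

# Only then could the post-obliterate incrementals load cleanly.
svnadmin load /var/svn/restored < incr-r16-20.dump   # r16-r20
```

Without an external record of exactly what was obliterated, the commented-out step cannot be reproduced, and the r16-r20 load is applied against the wrong repository state.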

Patrick, if by a traditional file backup tool you mean something like tar,
which backs up the filesystem, then I agree with you. An obliterate
operation would change the filesystem, and your next backup would catch
the changes. But 'svnadmin dump' and the backup scripts shipped with
Subversion are better to use: they ensure that the repository is backed up
in a consistent state, and they create a dump file that is platform
independent.
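For the filesystem-level case, Subversion also offers a safer alternative to a raw tar of the repository directory: 'svnadmin hotcopy' copies a live repository in a consistent state (the paths here are hypothetical):

```shell
# Consistent copy of a live repository; unlike a plain tar, hotcopy
# coordinates with the repository so in-flight commits cannot leave
# the copy half-written.
svnadmin hotcopy /var/svn/repos /backups/repos-copy
```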

> I have no clue about other mechanisms, and I don't know what the
> "official released backup scripts" are.
> But I cannot imagine anything that dumpfiltering would not mess up,
> yet that would fail here.

Agreed. Dumpfiltering has the same issue. I thought the whole discussion
was to come up with a method easier than that, because dumpfiltering is a
little scary.

> > Does it make sense to limit obliterate to just the branch you
> > specify? If the file has been copied to another branch, then any
> > cheap copy needs to be made expensive there, or you need to do
> > obliterate operations in the other branches too.
> >
> If "partial" obliteration is needed ("-r LOWER[:UPPER]"), then
> this problem needs to be handled:
> either by doing expensive copies, or by having the first copy
> serve as the base and further copies be copies of that. I'm not
> happy with either of those. Probably best would be to not allow
> partial obliteration of nodes with copies.
> Thinking of the usages we discussed earlier
> (confidentiality, or reclaiming disk space by deleting full
> branches and/or erroneously added files), I cannot see a need
> for keeping unchanged copies of the item we obliterate.

If the usage is to remove illegal, immoral or fattening files, I agree. But
I can envision using such a command to clean up "good" files that are simply
no longer needed, and which don't have any particular audit or retention
demands on them. For example, once I stop supporting a release of my code,
I might like to remove the various branches that went into it. But if I
copied some of those files to start supporting a newer release, then I still
want to keep the copies.

. . . .

> I still think that complete obliteration is the most used form of
> obliteration:
> svn obliterate /etc/passwd
> svn obliterate
> /old/project/neverNeededAnymore/otherwiseItsOnDVD723inTheCellar
> svn obliterate /some/bigFile.mdb
> IMO the next most important case is the one where, in a single
> revision, something was erroneously added to a file. It would
> certainly be nice to have a solution for that too, but I think it
> makes the whole thing a lot more complex.
> . . . .
> > If the operation created a new revision of the repository, so that
> > I could log the fact that file(s) had been obliterated, that would
> > be good.
> >
> I think we should certainly not renumber revisions. Having a
> default message saying what happened would also make sense to me.

Agreed here too. Renumbering revisions is not good. But it would be
helpful if there were a way to document obliteration in the log.


Received on Tue Jul 24 18:43:06 2007

This is an archived mail posted to the Subversion Users mailing list.
