
RE: svnadmin obliterate features (was: RE: the obliterate discussion - why dumpfiltering is no validworkaround)

From: patrick <Patrick.Wyss_at_mobilesolutions.ch>
Date: 2007-07-24 16:59:31 CEST

Erik Hemdal wrote:
> It would seem then that following an obliterate operation, one would have
> to
> invalidate any existing incremental backups and maybe all your existing
> backups too. After all, they depend on a state of the repository that no
> longer exists. If an admin uses a complex backup strategy, he then has to
> start from scratch with a complete backup and restart the backup scheme.
> If we didn't do that, then we'd have a situation where, in order to
> restore,
> one would have to apply some incremental backups up to the point of the
> obliterate operation, then repeat the obliterate, and then continue with
> restoring backups (because the structure of the repository is now
> different). Thinking about "obliteration tracking" makes merge tracking
> seem trivial.
I'm still not sure I understand you correctly...
Are you talking about FSFS and backups made with a "traditional" file backup tool?
If yes, I think there is no problem there: the (rev-)files we change get new
modification times and will simply be picked up by the next incremental/differential backup.

I have no clue about other mechanisms, and I don't know what the "officially
released backup scripts" are, but I cannot imagine a backup scheme that would
survive dumpfiltering and yet fail here.
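To make the file-backup point concrete, here is a minimal sketch of an mtime-based incremental copy. It assumes a flat FSFS-style revs directory (real FSFS shards rev files into subdirectories, which this flattens) and a stamp file marking the previous run; the function name and paths are mine, not any official Subversion backup script:

```shell
#!/bin/sh
# Sketch: mtime-based incremental backup of FSFS rev files.
# An obliterate that rewrites rev files in place gives them fresh
# mtimes, so the next incremental pass simply copies them again.

incremental_backup() {
    revs=$1; backup=$2; stamp=$2/.last-run
    mkdir -p "$backup"
    if [ -f "$stamp" ]; then
        # copy only rev files changed since the previous run
        find "$revs" -type f -newer "$stamp" -exec cp {} "$backup" \;
    else
        # first run: full copy
        find "$revs" -type f -exec cp {} "$backup" \;
    fi
    touch "$stamp"
}
```

Run twice in a row, only rev files touched since the stamp are copied, which is why an in-place obliterate would flow into the next incremental run without breaking the mechanism (though, as noted, older backups still contain the obliterated data).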

> Does it make sense to limit obliterate to just the branch you specify? If
> the file has been copied to another branch, then any cheap copy needs to
> be
> made expensive there, or you need to do obliterate operations in the other
> branches too.
If "partial" obliteration is needed ("-r LOWER[:UPPER]"), then this problem
needs to be handled: either by making the copies expensive, or by making the
first copy the new base and having further copies be copies of that. I'm not
happy with either of those.
Probably best would be to not allow partial obliteration of nodes with copies.
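As a sketch of what that proposed range syntax might look like (purely hypothetical; no such command exists, and the paths and flags are invented for illustration):

```
# hypothetical: obliterate a path only within a revision range
svnadmin obliterate -r 100:200 /path/to/repo trunk/secret.txt

# the problem case: the file was cheaply copied to a branch inside that
# range, so either the branch copy must be made expensive, or it has to
# be obliterated there as well
svnadmin obliterate -r 100:200 /path/to/repo branches/1.x/secret.txt
```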

Thinking of the use cases we discussed earlier (confidentiality, and reclaiming
disk space by deleting whole branches and/or erroneously added files), I cannot
see a need for keeping unchanged copies of the item we obliterate.

I probably subconsciously ;-) included these thoughts when I first formulated
my "simplest version" scenario (total obliteration including ancestors and copies).

> I don't know if this makes the task easier/harder/impossible. My point is
> that even if obliterate did not follow all the file history, but simply
> obliterated and updated the repository in specific branches, it would be
> very helpful.
I still think that complete obliteration will be the most-used form:
svn obliterate /etc/passwd
svn obliterate
svn obliterate /some/bigFile.mdb

IMO, next in importance is the case where, in one revision, something was
erroneously added to a file.
It would certainly be nice to have a solution for this, but I think it makes
the whole thing a lot more complex.

> If the operation created a new revision of the repository, so that I could
> log the fact that file(s) had been obliterated, that would be good.
I think we should certainly not renumber revisions. Having a default log message
saying what happened would also make sense to me.
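Such a default message could look something like this hypothetical `svn log` entry (wording, revision number, and author invented here for illustration):

```
------------------------------------------------------------------------
r1234 | svnadmin | 2007-07-24 16:59:31 +0200 (Tue, 24 Jul 2007)

Obliterated trunk/secret.txt (all revisions) by repository administrator.
------------------------------------------------------------------------
```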

Received on Tue Jul 24 16:58:31 2007

This is an archived mail posted to the Subversion Users mailing list.
