kfogel@collab.net writes:
> I don't really buy the argument for keeping the logs. Just because
> Berkeley DB happens to offer this incredibly detailed audit and
> recovery trail, doesn't mean we have to take advantage of it. CVS
> doesn't even support such a thing.
The comparison with CVS is unfair. The chances of completely ruining
an entire CVS repository to the point of uselessness are sooooo much
smaller than those for ruining a Berkeley DB environment.
An analogy: Anybody here use DoubleSpace back in the day -- where
your entire collection of data was effectively archived into one
giant file? And did anybody ever get a single bad disk sector that
rendered that giant file -- and hence all your "real" data --
inaccessible?
> What are the circumstances where a regular backup schedule with
> 'svnadmin hotcopy' is insufficient and one would actually need these
> space-eating logs? Are such circumstances common enough to warrant
> turning on the space-eating by default?
Using logfiles alone as a backup mechanism has its pros and cons. On
the "pro" side, you get incremental backup capability, with a
(default) granularity of one logfile per megabyte of logged data. The
obvious "con" is that reconstructing a repository from logfiles alone
must start from the very beginning of the repository's life -- and
that could be a whole lotta logfiles to replay.
Staying focused on this particular tradeoff, the best backup method,
therefore, is to use a combination of periodic hotcopies + logfiles
created/modified since your last hotcopy. But it isn't so black and
white, because our scope isn't strictly limited to these forms of
backup. We also have incremental repository dumps + rev-prop-change
archival, for example, which come with their own set of pros and cons.
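The hotcopy-plus-logfiles combination might look roughly like the sketch below. The paths and the "weekly" tag are placeholders, and svnadmin is stubbed out so the fragment runs anywhere; on a real system you'd delete the stub and use the actual binary:

```shell
# Stub so the sketch runs without a repository; delete on a real system.
svnadmin() { echo "svnadmin $*"; }

REPO=/var/svn/repo          # placeholder repository path
BACKUP=/var/backups/svn     # placeholder backup area
STAMP=weekly                # in real use, e.g. a date tag

# 1. Periodic full snapshot: hotcopy takes a consistent copy of the
#    whole repository, Berkeley DB environment included.
svnadmin hotcopy "$REPO" "$BACKUP/hotcopy-$STAMP"

# 2. Between hotcopies, list the logfiles the BDB environment no
#    longer needs; together with the latest hotcopy they cover every
#    change made since it. In real use you'd archive each listed file
#    (and only then remove it from the live environment).
svnadmin list-unused-dblogs "$REPO"
```

Restoring means taking the most recent hotcopy and replaying only the logfiles archived after it, rather than the repository's whole history.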
Received on Wed Nov 26 19:52:39 2003