Branko Čibej <firstname.lastname@example.org> writes:
> Karl Fogel wrote:
> > Sorry, now I'm confused. I wasn't proposing that. I'm simply
> > saying that a) getting the data to back up is a read-only operation
> > and b) the filesystem already handles reads correctly, there is no
> > need to worry about locking.
> > "Jonathan S. Shapiro" <email@example.com> writes:
> > > So now you propose to write a custom backup tool. This strikes me as
> > > something that people are really unlikely to get installed correctly.
> > > There's nothing wrong with the idea, but real-world administrators always
> > > have negative cycles available.
> > >
> > > Jonathan
> Usually, backup tools read ordinary files from disk, which is a bit
> different from reading out of svn_fs, right? So Jonathan has a point,
> I think.
I'm pretty sure that naively copying the files can corrupt a running
Berkeley DB database: if the log file happens to get archived before
the database file, the database will reflect transactions absent from
the log file. If the log file is archived after the database file,
it's fine. The recommended Berkeley DB archival procedure pretty much
amounts to just that: copy the database files first, then the log
files. They provide a program that helps you select which log files
to copy, but that's it.
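(To make the ordering concrete, here's a minimal sketch of that
archival procedure. It's a simulation with dummy files in temporary
directories -- the file names are made up, and in real use the list of
log files would come from Berkeley DB's own log-selection utility
rather than a glob.)

```shell
# Simulated Berkeley DB hot backup: database files first, logs second.
# All paths/names here are hypothetical, for illustration only.
DB_HOME=$(mktemp -d)   # stands in for the DB environment directory
BACKUP=$(mktemp -d)    # stands in for the backup destination

# Fake environment contents: one database file, two log files.
touch "$DB_HOME/repos.db" "$DB_HOME/log.0000000001" "$DB_HOME/log.0000000002"

# Step 1: copy the database file(s) FIRST ...
cp "$DB_HOME"/*.db "$BACKUP"/

# Step 2: ... THEN copy the log files, so the archived logs contain
# every transaction the archived database reflects.
for log in "$DB_HOME"/log.*; do
    cp "$log" "$BACKUP"/
done
```

Reversing the two steps is exactly the failure mode described above:
the copied database could contain transactions that the copied logs
don't, so recovery can't reconcile them.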
I would love to see a filesystem design that gives us the promises we
want to make our users (transactions, recoverability), assuming only
a). And while we're at it, let's throw in NFS friendliness, too. But
I think it might be challenging. Concrete, detailed suggestions only,
please. Does DCMS solve this in some nice way?
Received on Sat Oct 21 14:36:12 2006