On Sat, 02 Jul 2005, Saulius Grazulis wrote:
> On Friday 01 July 2005 21:17, kfogel@collab.net wrote:
>
> > Well, until a checkpoint happens (i.e., data is sync'd from log file
> > to database file), the logfiles hold the only copy of actual data. So
> > BDB's sensitivity to logfile corruption is understandable! :-)
>
> To me, this explanation does not sound convincing. Maybe my understanding of
> databases is a bit naive, but shouldn't a serious db that uses transactions
> (and I believe bdb uses them) write a transaction first and only then set a
> "transaction finished" bit as atomically as possible? In such a setting, a
> crash should only lose the last transaction (which for svn would mean the
> last checkin), leaving a database that is fully usable and merely missing
> that last portion of data. What is wrong in this picture?
I'll chime in, as a DB junkie. A 'serious' DBMS attempts to be 'atomic',
which is what you're talking about: a transaction either happens, or it
doesn't, come hell or power failure. Implementation-wise, this has
translated into the notion that you record logs first -- say what you
mean to do, before you do it, so that if you're interrupted, you can
replay the logs.
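The write-ahead idea above can be sketched in a few lines of Python. This is a toy illustration, not BDB's actual log format or recovery code -- the record layout, the replay logic, and every name here are invented for the example:

```python
import json
import os

LOG = "wal.log"
db = {}   # toy stand-in for the on-disk "database file"

def log_write(op):
    # Say what we mean to do, and force it to stable storage,
    # *before* we do it.
    with open(LOG, "a") as f:
        f.write(json.dumps(op) + "\n")
        f.flush()
        os.fsync(f.fileno())

def put(key, value):
    op = {"key": key, "value": value}
    log_write(op)                  # 1. intent hits the log first
    db[op["key"]] = op["value"]    # 2. only then touch the data

def replay():
    # After a crash, re-apply every logged operation; replaying a
    # put twice is harmless because puts are idempotent.
    with open(LOG) as f:
        for line in f:
            op = json.loads(line)
            db[op["key"]] = op["value"]

if os.path.exists(LOG):
    os.remove(LOG)
put("trunk/foo.c", "rev 1")
put("trunk/foo.c", "rev 2")
db.clear()   # simulate losing the data-file state in a crash
replay()     # recover from the log alone
print(db["trunk/foo.c"])   # prints: rev 2
```

Note that until a checkpoint copies the logged data into the data file, the log really is the only copy of that data -- which is why log corruption is so serious.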
Now, Oracle, say, plays all sorts of games on every platform they
support to ensure that data gets to disk. The best they can do is make
sure it reaches the RAID controller/ATA drive/SCSI disk/MD virtual
device cache. Last I heard, they have something like 8000 engineers --
granted, not all working on the DB. And Larry still peddles "bare iron"
from time to time, attempting to become an OS. And I haven't even started
talking about bugs or "unimplemented features". That code base is coming
up on 30 years old.
And Oracle DBs still become corrupt. If you don't think it is common,
ask me about the mysterious fuc^H^H^Hfailure in 10g bitmap indexing
that I spent last week trying to correct, and which also baffled the
good people at Oracle support. Not cool. (The best we settled on was:
"if it hurts when you poke there, don't poke there. A patch is in the
mail.")
Granted, BDB is a somewhat more constrained environment in comparison.
That doesn't constrain the problem set as much as you'd think. If you're
interested, hang out on postgres-dev a bit -- I think those folks have
the problem set down, and are doing about the coolest stuff around at
the moment, at least for values of cool where cool = (cool + in public).
> So what makes bdb-using-svn so sensitive to unanticipated breaks in
> processing? Actually, I always thought that the only purpose of having a
> sophisticated db management system is to provide higher reliability than a
> plain filesystem does, not lower...
I believe the issue lies in the specific way svn is (ab)using BDB. BDB is
pretty robust, for what it is. For the record, I don't know what,
exactly, the issue is (I'd be very curious to find out). But much like
Oracle or Postgres, I imagine there's no end to the ways in which you
can abuse it. Using the C libs to interface with any of them, there
are countless ways to hose a DB. (Without even trying: you should
see some of the cool "don't do that" things I learned with the OCI docs
and a compiler. Give me a hammer, and even though I don't see thumbs,
I'll still hit them.) I've never written code against BDB, but I can't
imagine it's any different.
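The "only lose the last transaction" behaviour the original poster asks about is exactly what a durable, checksummed commit record buys you -- and skipping that discipline is one easy way to hose a DB through a low-level interface. A toy sketch (the record format here is invented for illustration; real BDB log records look nothing like this):

```python
import hashlib
import json

def encode(op):
    # A commit record: payload plus a checksum of the payload.
    body = json.dumps(op)
    digest = hashlib.sha256(body.encode()).hexdigest()
    return body + "|" + digest + "\n"

def replay(log_lines):
    # Apply records until the first torn/corrupt one. A crash in the
    # middle of writing a record costs at most that final transaction;
    # everything before it is intact.
    db = {}
    for line in log_lines:
        body, _, digest = line.rstrip("\n").rpartition("|")
        if hashlib.sha256(body.encode()).hexdigest() != digest:
            break   # torn record: discard the tail of the log
        op = json.loads(body)
        db[op["key"]] = op["value"]
    return db

log = [encode({"key": "r1", "value": "commit 1"}),
       encode({"key": "r2", "value": "commit 2"})]
# Simulate a crash halfway through writing the third record:
log.append(encode({"key": "r3", "value": "commit 3"})[:20])

db = replay(log)
print(sorted(db))   # prints: ['r1', 'r2']
```

If an application writes to the store outside this discipline -- say, by bypassing transactions entirely -- no amount of log replay can save it, which may well be the flavour of (ab)use at issue here.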
-j, not an svn developer, but very sympathetic.
-- Jamie Lawrence
jal@jal.org "The sign that points to Boston doesn't have to go there."
- Max Scheler
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Sat Jul 2 16:00:10 2005