On Tue, 2004-03-30 at 07:04, Ron Bieber wrote:
> As Linux file systems default to async writes and I've never had a problem
> with boxes losing data in power failures here at home, I don't see a big
> problem with this as long as the box isn't running an oracle production
> database or something like that. Given the fact that the computer room
> at work has uninterruptible power and backup generators anyway, I see even
> less of a problem.
The situation is somewhat more complicated than you give it credit for
in your first sentence.
The Linux ext2 filesystem behaves as you say, with completely asynch
writes and no particular attention to ensuring failure atomicity of any
kind. But it does have a pretty sophisticated fsck designed to get a
filesystem back into working order in the face of failures. (It can't
protect against certain kinds of problems, like data blocks containing
data from the wrong files, but at least it won't throw up its hands in
disgust and tell you to newfs.)
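(This, incidentally, is why applications that really care about durability
don't trust the mount options at all and call fsync() themselves before
reporting success. A minimal sketch of the idea -- the file name and the
data written are just placeholders for illustration:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char buf[] = "commit record\n";

        /* "journal.dat" is a made-up name for this example. */
        int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        if (write(fd, buf, sizeof(buf) - 1) == -1)
            perror("write");

        /* Force the data out of the page cache and onto the disk before
           telling anyone the write succeeded.  Without this, a filesystem
           mounted async may lose the data on power failure. */
        if (fsync(fd) == -1)
            perror("fsync");

        close(fd);
        return 0;
    }

That only protects the application's own data, of course; it does nothing
for the consistency of the filesystem's metadata, which is the fsck/journal
question below.)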
Modern Linux filesystems (reiser, xfs, ext3) have asynchronous file
creation but use techniques (careful ordering or journals) to make sure
that the on-disk filesystem is consistent at all times. As a result,
they tend to be a little slower than ext2, but much safer. Most people
who use Linux these days use one of these filesystems.
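(For what it's worth, ext3 even lets you pick how much gets journaled with
the data= mount option: data=ordered is the default, data=journal journals
file data as well as metadata, and data=writeback is the fastest and
weakest. Something like this /etc/fstab line -- device and mount point made
up for the example -- asks for full data journaling:

    /dev/hda2   /home   ext3   defaults,data=journal   0  2
)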
Solaris with asynch writes has neither the advantage of a sophisticated
fsck (as far as I know) nor the advantage of ordered writes or
journaling. You stand a pretty good chance of being okay in the face of
an unclean shutdown, but nothing guarantees it, so it isn't what I'd
call safe.
So, feel free to rely on backups and uninterruptible power; those are
fine things. But I don't think you can rely on the experience of Linux
users.