Wadsworth, Eric (Contractor) wrote:
>It's not really about how much hassle it is, but rather who has the hassle.
>Do we want the users of subversion to have the hassle of manually
>synchronizing hundreds of working copies each time a repository has to be
>recovered, or do we want the developers to have the one-time hassle of
>writing code to automate the process?
I think we're talking about a slightly different frame of mind here.
Using Subversion is supposed to mean you don't have these problems to
worry about in the first place.
My working copy has broken a couple of times, due to problems with failed
commits. These (so far) have been the result of known bugs that are
being or have been resolved. None of them have compromised the
repository for me. So when my working copy breaks, all I do is check out
a new working copy and delete the old one.
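That recovery is a one-liner. A minimal sketch, assuming a placeholder repository URL and directory names:

```shell
# Check out a fresh working copy next to the broken one.
# (http://svn.example.com/repos/trunk and the wc-* names are placeholders.)
svn checkout http://svn.example.com/repos/trunk wc-fresh

# Once the fresh copy looks sane, throw away the broken one.
rm -rf wc-broken
```

Any uncommitted local changes would have to be carried over by hand, of course, which is another argument for committing often.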
The mind shift is seeing your working copy as a temporary view of some
slice of the repository, NOT as a repository that needs to be kept in
sync with the main repository. Why do you need to synchronize hundreds
of working copies? I mean, users have to do that every day, using
Update, by the very nature of revision control systems and working
copies--why worry about one particular working copy, when it's so easy
to get a new one?
>Agreed, ideally this would never happen! But realistically, we have
>power-outages that corrupt hard drives, hackers who delete files, inept
>sysops who screw things up, and the possible *gasp* bug in a program
>somewhere that results in a corrupt repository.
These are exactly the issues Subversion is there to address. Notice all
the features built in, which prevent partial commits--should the power
go out during a commit, the commit fails. When the server is back up
again, users simply commit again and it works. There's rarely a need
even to run svnadmin recover on the repository.
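And even when it is needed, that step is a single command. A sketch, with a placeholder repository path:

```shell
# Run this only while nothing else (svnserve, Apache, hooks) has the
# repository open; /var/svn/repos is a placeholder path.
svnadmin recover /var/svn/repos
```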
If viruses/hackers get in, presumably they may infect or compromise the
HEAD of the repository--it would take some doing to go through and
affect previous versions of your repository. Revert to an earlier
version and you're done.
Corrupt hard drives/deleted db files? Well, yes, that's a problem, but
that's what you make back-ups for, and if you're getting a lot of these,
you should be running some sort of redundant system, such as RAID.
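For the back-up side, svnadmin's dump and load subcommands cover the whole cycle. A minimal sketch, with placeholder paths and file names:

```shell
# Take a full dump of the repository (e.g. from a nightly cron job).
svnadmin dump /var/svn/repos > /backups/repos-latest.dump

# To restore after a disk failure: create a fresh repository and
# load the most recent dump into it.
svnadmin create /var/svn/repos
svnadmin load /var/svn/repos < /backups/repos-latest.dump
```

Because a dump replays every revision, a repository restored this way is a faithful copy, and existing working copies keep working against it.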
But if you're restoring from a backup, I don't see how that breaks the
working copies. Seems to me the only real issue is if you're changing
the layout of a repository, pulling some branch out to put in its own
repository. And if you're doing that, don't your users need to know
about it anyway? Again, the solution is to just check out a new working
copy.
>It's pretty well understood that when everything is down (as when a
>repository is recently restored) that there is a lot of pressure on the
>sysadmins to get it back up. This pressure can lead to mistakes, especially
>when there are a lot of manual steps, and a lot of mental grunting to figure
>out how to restore a usable system. Automation, my friends!
Ditch your working copy, check out a new one. It's hard to get simpler
than that! If your users have working copies that are a lot different
than the repository, maybe you should spend your energy educating them
about committing their changes more frequently!
>(For the record, I think subversion is an awesome tool.)
Received on Thu Aug 14 23:22:29 2003