On Wednesday, 10 May 2006 01:57, Giovanni Bajo wrote:
> Dirk Schenkewitz <email@example.com> wrote:
> > Could you briefly describe what you tried?
> > I would do the following:
> > A) Copy a WC to another name, perhaps two copies while you're at it.
> > Then do an 'svn revert' to switch back to what should be the latest
> > checked-in version, if the server were fine. Then try to check in.
> > If that does not work:
> > B) Check out the latest version from the server to a NEW directory.
> > Copy everything from the just-reverted WC into it. Check in. This
> > must work.
> > C) Finally, copy everything from the unreverted copy on top of it.
> > Check in.
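Steps B) and C) above could be scripted roughly like this (a sketch only; the repository URL and directory names are assumptions, and the tar `--exclude` pipeline assumes GNU tar -- rsync would work as well):

```shell
#!/bin/sh
# Hypothetical paths -- adjust to your setup.
REPO_URL=http://svn.example.com/repos/project/trunk
OLD_WC=$HOME/wc-reverted      # the 'svn revert'-ed copy from step A)
NEW_WC=$HOME/wc-fresh

# B) check out the server's latest revision into a fresh directory
svn checkout "$REPO_URL" "$NEW_WC"

# copy the files over, excluding the .svn administrative areas
(cd "$OLD_WC" && tar cf - --exclude=.svn .) | (cd "$NEW_WC" && tar xf -)

svn commit -m "Restore working-copy contents after repository rollback" "$NEW_WC"
```

Step C) would then repeat the copy and commit with the unreverted working copy as the source.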
> This won't work, because every remote operation done on the WC requires the
> base revision of the WC itself to exist, and it requires *that* revision of the
> repository to actually *match* the text-base contents of the working copy,
> since the server and the client talk in deltas assuming this.
> Say you restored up to r100 from the backups. All the working copies after r100
> are doomed. If you have three working copies pointing at r120, r140 and r160,
> you should first collect them from the users, then restore them with the trick
> you suggested (checkout, copy all the files over, commit) *and* adding padding
> (blank) commits into the repository so that r120 in the server actually matches
> the text-base version of the r120 repository, and so on for all three of them.
> Now imagine you have 500 working copies around.
Right - without padding/"empty" revisions, r120 becomes r101, r140 becomes r102,
and so on. But since the intermediate revisions are lost anyway, why not live with
the new revision numbers?
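For those who do want the numbers to line up, the padding trick Giovanni describes could be scripted roughly like this: pad the repository with no-op revisions until HEAD sits one below the target, then commit the restored working-copy contents so that commit lands at exactly the target revision. A sketch only -- REPOS, REPO_URL and TARGET are assumptions:

```shell
#!/bin/sh
REPOS=/var/svn/repos          # repository on the server's filesystem
REPO_URL=file://$REPOS
TARGET=120                    # revision the old working copy points at

# each URL-side 'svn mkdir' creates exactly one new revision
while [ "$(svnlook youngest "$REPOS")" -lt "$((TARGET - 1))" ]; do
    svn mkdir -m "padding revision" \
        "$REPO_URL/padding-$(svnlook youngest "$REPOS")"
done

# now commit the restored contents of the r120 working copy;
# that commit becomes r120 and matches the WC's text-base
```

The scratch padding-* directories could be deleted afterwards (at the cost of one more revision), and as Giovanni notes, this still has to be repeated for every distinct base revision in use.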
> And notice that you're still doomed to fail, because the checkout + copy files +
> commit trick does not correctly take care of the *properties* of the files.
Right, not if the properties have changed within the missing revisions.
> If you don't restore the properties, you are in big trouble if they ever change
> again, since the working copies will be desynchronized (e.g. they will think
> that a certain file does not have properties, while the server thinks it has),
> so the first time some delta is transmitted, there will probably be some
Yes, the cleanup means a considerable amount of work for all users.
> Also, another really negative thing is that there's no way of flagging working
> copies as "bad". I told my users to destroy their working copies, but what if
> they forget / make mistakes, and work in a desynchronized working copy? They're
> probably bound to go into even deeper trouble. And there's really nothing I can
> do. The only thing I thought of was dumping the whole repository and reloading
> it into a *new* repository, with a new UUID, so that the old working copy
> wouldn't work. But I have other projects in the same repository, and I didn't
> want to invalidate working copies of the other projects too. And I didn't want
> to manage *another new* Subversion server just for this project.
I see. Just getting back as much of the history as possible isn't enough. You're
right. The situation is worse than I thought.
> The net result is that now I have a post-commit hook which dumps and backups
> a revision as soon as it is committed. I believe that any other backup strategy
> for an SVN server (say, hotcopy once a day) is basically useless.
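A minimal sketch of such a post-commit hook (Subversion invokes hooks/post-commit with the repository path and the new revision number as arguments; BACKUP_DIR and the file naming are assumptions):

```shell
#!/bin/sh
# hooks/post-commit sketch: dump each revision as soon as it is committed
REPOS="$1"
REV="$2"
BACKUP_DIR=/var/backups/svn   # assumed location, must be writable by the server

# --incremental + --deltas keeps each per-revision dump file small
svnadmin dump "$REPOS" --incremental --deltas -r "$REV" \
    > "$BACKUP_DIR/$(basename "$REPOS")-r$REV.dump"
```

To restore, the per-revision dump files would be fed to 'svnadmin load' in revision order.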
I have something similar: a script that runs as a cron job every 10 minutes and
checks whether anything has changed; if so, it makes a hotcopy. Every 30 minutes
the backup server logs in and copies the new hotcopies. But if worst comes to
worst, that may not be enough. I had some difficulties with running scripts from
a hook, but maybe I should try again, harder.
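For reference, the "hotcopy only when something changed" check can be done by remembering the last backed-up revision. A sketch; REPOS, BACKUP_DIR and the state file are assumptions:

```shell
#!/bin/sh
# cron sketch: hotcopy the repository only when a new revision has appeared
REPOS=/var/svn/repos
BACKUP_DIR=/var/backups/svn
STATE=$BACKUP_DIR/last-backed-up-rev   # holds the last revision we copied

YOUNGEST=$(svnlook youngest "$REPOS")
LAST=$(cat "$STATE" 2>/dev/null || echo -1)

if [ "$YOUNGEST" -gt "$LAST" ]; then
    svnadmin hotcopy "$REPOS" "$BACKUP_DIR/hotcopy-r$YOUNGEST"
    echo "$YOUNGEST" > "$STATE"
fi
```

Even so, as noted above, anything committed between two cron runs is still exposed; the per-commit hook closes that window.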
Thanks for sharing!
Received on Wed May 10 13:48:27 2006