On Thu, Feb 06, 2014 at 02:11:45PM +0100, Alexander Lüders wrote:
> So after the failed svn delete a subsequent cleanup would try to finish the
> unfinished delete?
Well, here's how the mechanics work in some more detail:
The deletion in metadata doesn't fail. It adds a base-deleted row
for the file to the NODES table in wc.db, and also adds a work queue
item which will be run later to delete the on-disk file, once the
entire deletion (which is possibly recursive in the general case) has
been committed to wc.db. If this fails partway through for some
reason, all metadata changes are rolled back, and nothing changes.
It's the deletion of the on-disk file that fails, when the work
queue is run after all related metadata changes have been completed.
'svn cleanup' simply tries to run the work queue to get the working
copy into a consistent state. It doesn't know how to undo completed
metadata modifications.
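To make that sequence concrete, here's a minimal sketch in Python
(illustrative only, not Subversion's actual implementation; the schema,
table names, and function names are invented): the delete is recorded in
the metadata database together with a work-queue item in one transaction,
and a cleanup-style pass later retries the queue to remove the on-disk
file.

```python
# Illustrative sketch (invented schema, not Subversion's actual code):
# a delete is recorded in the metadata database together with a
# work-queue item in one transaction; a cleanup-style pass retries the
# queue afterwards to remove the on-disk file.
import os
import sqlite3
import tempfile

def schedule_delete(db, path):
    # One transaction: mark the node deleted and enqueue the disk work.
    # If anything fails here, the whole transaction rolls back.
    with db:
        db.execute("UPDATE nodes SET deleted = 1 WHERE path = ?", (path,))
        db.execute("INSERT INTO work_queue (op, path) VALUES ('rm', ?)",
                   (path,))

def run_work_queue(db):
    # Like 'svn cleanup': retry pending items. Completed metadata
    # changes are never undone; a failing os.remove() leaves its item
    # queued for the next run.
    for item_id, path in list(db.execute("SELECT id, path FROM work_queue")):
        os.remove(path)
        db.execute("DELETE FROM work_queue WHERE id = ?", (item_id,))
        db.commit()

# Demo: schedule and then carry out the deletion of one file.
wc = tempfile.mkdtemp()
victim = os.path.join(wc, "a.txt")
with open(victim, "w") as out:
    out.write("hi")
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE nodes (path TEXT PRIMARY KEY, deleted INTEGER DEFAULT 0);
    CREATE TABLE work_queue (id INTEGER PRIMARY KEY, op TEXT, path TEXT);
""")
db.execute("INSERT INTO nodes (path) VALUES (?)", (victim,))
schedule_delete(db, victim)
run_work_queue(db)
```

If os.remove() raised here (say, EACCES), the node would stay marked
deleted in metadata while the work-queue item remained pending, which is
exactly the state 'svn cleanup' tries to finish.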
> I just wonder why it's different for a svn commit of a versioned modified
> file. Applying the same access restrictions on a modified file do not result
> in a "locked" working copy after a failed commit. I.e. the commit fails and
> no cleanup is necessary to retry the unfinished commit, which eventually
> would fail again.
File modifications are not tracked as part of the metadata.
They are detected on the fly during the commit process by comparing
file timestamps and sizes to values recorded in the metadata during
checkout/update, and by comparing file content if necessary.
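A rough sketch of that detection logic (illustrative Python, not the
actual libsvn_wc code; is_modified and its parameters are invented for
the example): check the cheap size comparison first, then the timestamp,
and only fall back to reading the content when the timestamp differs.

```python
# Hypothetical sketch of on-the-fly modification detection: compare
# size and timestamp recorded at checkout/update time, reading the
# actual content only when the timestamp is inconclusive.
import os
import tempfile

def is_modified(path, recorded_size, recorded_mtime, recorded_content):
    st = os.stat(path)
    if st.st_size != recorded_size:
        return True               # size changed: definitely modified
    if int(st.st_mtime) == int(recorded_mtime):
        return False              # size and timestamp match: assume clean
    # Timestamp differs but size matches: compare content to be sure.
    with open(path, "rb") as fp:
        return fp.read() != recorded_content

# Demo: a file that matches its recorded values is reported clean.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello")
tmp.close()
st = os.stat(tmp.name)
unchanged = is_modified(tmp.name, st.st_size, st.st_mtime, b"hello")
```

Because nothing about the modification is written to metadata up front,
a failed commit leaves nothing to roll back or retry, which is why no
cleanup is needed in that case.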
> I don't understand why it is necessary to retry the failed delete (keeping
> it in the journal) at all.
Because the operation is done in metadata first and on disk second.
Arguably, we could check the on-disk state before we delete the
file in metadata. But the implications would have to be investigated
first (whether to completely fail in the recursive delete case,
whether this would hurt performance a lot, etc.).
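The pre-check speculated about above could look roughly like this (a
hypothetical sketch, not existing Subversion behaviour; precheck_delete
is an invented name): walk the tree before touching any metadata and
report every entry whose removal would fail, at the cost of an extra
full traversal — which is the performance concern mentioned.

```python
# Speculative sketch: before recording a recursive delete in metadata,
# verify that every on-disk removal could actually succeed, so the
# whole operation can fail up front instead of leaving pending
# work-queue items behind.
import os
import tempfile

def precheck_delete(root):
    problems = []
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames + dirnames:
            # Removing a directory entry needs write access to the
            # directory that contains it.
            if not os.access(dirpath, os.W_OK):
                problems.append(os.path.join(dirpath, name))
    if not os.access(os.path.dirname(root) or ".", os.W_OK):
        problems.append(root)
    return problems  # empty list -> the delete should succeed everywhere

# Demo: a small writable tree passes the pre-check.
wc = tempfile.mkdtemp()
os.makedirs(os.path.join(wc, "sub"))
open(os.path.join(wc, "sub", "f.txt"), "w").close()
issues = precheck_delete(wc)
```

Note this is inherently racy (permissions can change between the check
and the actual removal), so the work queue would still be needed as a
backstop.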
Received on 2014-02-06 14:48:30 CET