
Re: [PATCH] Updating exclusively opened files locks working copy

From: Philip Martin <philip_at_codematters.co.uk>
Date: 2006-08-02 03:26:34 CEST

"D.J. Heap" <djheap@gmail.com> writes:

> Yes, there may be a better way to handle the problem, but I'm not
> terribly familiar with the working copy and log running code. And
> that is correct -- this is just attempting to detect the situation
> where other applications already have files open and locked that we
> need to update, preventing us from updating them. In the distant
> past, failed updates like this didn't cause the working copy to end up
> locked -- but now they do.

If running a log file fails, the working copy remains locked. If it
wasn't left locked in the past, that probably means some operation
that wasn't loggy has since become loggy. (I seem to remember the
first command in a log file being special, i.e. if the log file had
done nothing yet it could simply be removed, but at first glance that
logic doesn't seem to be present any more.)
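
To make that concrete, the loggy pattern looks roughly like this (the
wc_* helpers below are made-up names for illustration, not the real
svn_wc_* entry points):

  /* Rough sketch of the loggy pattern; the wc_* helpers are made-up
     names, not the real svn_wc_* entry points. */
  static svn_error_t *
  do_loggy_operation(const char *dir, apr_pool_t *pool)
  {
    SVN_ERR(wc_lock_dir(dir, pool));    /* writes the .svn/lock file    */
    SVN_ERR(wc_write_log(dir, pool));   /* records the work to be done  */
    SVN_ERR(wc_run_log(dir, pool));     /* a failure here returns early */
    SVN_ERR(wc_unlock_dir(dir, pool));  /* ...so this never runs and the
                                           lock stays until 'svn cleanup' */
    return SVN_NO_ERROR;
  }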

> As you said, if an application were to grab the file between this
> check and the log running code, it would still fail -- but what
> happens 95% of the time (in my experience) is that people have an
> application (editor of some kind, especially with binary files)
> running that is holding the file locked when they run update. The
> update fails, they remember they need to close the editor/application,
> but now their working copy is locked and they have to run cleanup and
> then update again.
> update again.
>
> In the current implementation, when the update editor is running, it
> doesn't actually do much with the real file (just a stat, I think)
> until close_dir time when it is processing the log file and tries to
> really update the file. It fails at this point, and I didn't see any
> way to gracefully recover -- all error handling just seems to leave
> the directory locked and force a 'cleanup' run later. Is there
> something better we could do at that point?

That's by design.

> Even this patch should probably really be tweaked to open the file in
> read-write mode (to be a truer check) and that would require setting
> the file to read-write -- but we need to do that later, anyway, so I
> guess that's not too much of an issue.
>
> I'd be happy to handle it better in the log running code, though, if
> we can. Or maybe update should just run cleanup automatically or
> something?

The whole point of cleanup is that it's not supposed to be run
automatically: something has gone wrong that requires user
intervention. In this case, running cleanup automatically would surely
hit the same locked-file problem?

Your solution might be the best way to do it, although if you care
about the behaviour you should write a regression test, as it's easy
to break corner cases like this.

Alternatively, perhaps you could extend the log file processing so
that it does all the file handling while writing the log file; then it
would fail before the log file is ever run.
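
For example, a writability probe like the one you describe could be
run while the log is being written, using plain APR, so the update
would error out before there is any log file to run (untested sketch):

  #include <apr_file_io.h>

  /* Untested sketch: try to open FNAME read-write.  On Windows this
     fails with a sharing violation when another application has the
     file open, which is exactly the situation we want to detect
     before any log file is run. */
  static apr_status_t
  probe_writable(const char *fname, apr_pool_t *pool)
  {
    apr_file_t *file;
    apr_status_t status = apr_file_open(&file, fname,
                                        APR_READ | APR_WRITE,
                                        APR_OS_DEFAULT, pool);
    if (status == APR_SUCCESS)
      apr_file_close(file);
    return status;
  }

Note that a read-only working file would also fail the probe unless it
is made writable first, which matches your point about needing to set
the file read-write anyway.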

-- 
Philip Martin
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Wed Aug 2 03:26:59 2006
