
Re: [PATCH] Updating exclusively opened files locks working copy

From: D.J. Heap <djheap_at_gmail.com>
Date: 2006-08-01 02:01:02 CEST

On 7/30/06, Philip Martin <philip@codematters.co.uk> wrote:
> Eric Hanchrow <offby1@blarg.net> writes:
>
> > I know precisely nothing about APR programming, so forgive me if this
> > is stupid, but: doesn't this leak a file handle?
> >
> > +  /* Make sure the file is not exclusively opened by someone else. */
> > +  if (!adding && entry && entry->schedule != svn_wc_schedule_delete)
> > +    {
> > +      apr_file_t* file;
> > +      SVN_ERR(svn_io_file_open(&file, fb->path, APR_READ, APR_OS_DEFAULT,
> > +                               subpool));
> > +    }
>
> The file will be closed when the subpool is destroyed and that happens
> immediately after this code. Once the file has been closed it might
> then get locked and so the problem has not been eliminated but simply
> reduced. It's a bit like doing stat() before open() to check whether
> the file exists; it's not a good solution, since open() still has to
> handle errors.
>
> In this case it would be better to improve the error handling that is
> going wrong, although the original report is short on details, so it's
> hard to know what to suggest, and log handling was mentioned, which
> can make things tricky. If it turns out that the patch is the best
> way to do things, then it needs a much better comment explaining what
> is going on. One further disadvantage of the patch is that the code
> for the "solution" is a long way from the code that causes the
> "problem", and there is no regression test.
>
> --
> Philip Martin
>

Yes, there may be a better way to handle the problem, but I'm not
terribly familiar with the working copy and log-running code. And you
are correct -- this patch just attempts to detect the situation where
another application already has a file open and locked that we need
to update, preventing us from updating it. In the distant past,
failed updates like this didn't leave the working copy locked -- but
now they do.

As you said, if an application were to grab the file between this
check and the log running, the update would still fail -- but what
happens 95% of the time (in my experience) is that someone has an
application (an editor of some kind, especially with binary files)
holding the file locked when they run update. The update fails, they
remember they need to close the editor, but by then their working
copy is locked and they have to run cleanup and then update again.
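
For illustration, the cycle from the user's side looks roughly like
this (the exact error messages vary by platform and aren't verbatim):

    $ svn update              # fails: an editor still holds foo.doc open
    svn: Can't open file 'foo.doc'
    $ svn update              # second try: working copy is now locked
    svn: Working copy '.' locked
    $ svn cleanup
    $ svn update              # finally succeeds, once the editor is closed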

In the current implementation, when the update editor is running, it
doesn't actually do much with the real file (just a stat, I think)
until close_dir time, when it processes the log file and tries to
really update the file. It fails at that point, and I didn't see any
way to recover gracefully -- all of the error handling just seems to
leave the directory locked and force a 'cleanup' run later. Is there
something better we could do at that point?
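
For example -- and this is only a sketch, since I'm guessing at where
the failure actually surfaces; run_log_install_file() is a made-up
name, while svn_io_file_open(), svn_error_createf(), and
SVN_ERR_WC_OBSTRUCTED_UPDATE are real -- the open failure could at
least be wrapped in something more descriptive before the lock gets
left behind:

    #include "svn_error.h"
    #include "svn_io.h"

    /* Sketch: give a clearer error when a working file can't be opened
       during log running.  The function name is hypothetical, and
       whether SVN_ERR_WC_OBSTRUCTED_UPDATE is the right code to use
       here is a guess on my part. */
    static svn_error_t *
    run_log_install_file(const char *path, apr_pool_t *pool)
    {
      apr_file_t *file;
      svn_error_t *err = svn_io_file_open(&file, path,
                                          APR_READ | APR_WRITE,
                                          APR_OS_DEFAULT, pool);
      if (err)
        return svn_error_createf(SVN_ERR_WC_OBSTRUCTED_UPDATE, err,
                                 "Cannot update '%s'; it appears to be "
                                 "open in another application", path);

      /* ... install the new text into FILE ... */
      return SVN_NO_ERROR;
    }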

Even this patch should probably be tweaked to open the file in
read-write mode (to make the check a truer one), and that would
require setting the file read-write first -- but we need to do that
later anyway, so I guess it's not too much of an issue.
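
Roughly like this (untested; it assumes fb->path may still be
read-only at this point, e.g. from svn:needs-lock, and that
svn_io_set_file_read_write() is the right way to flip that):

      /* Make sure the file is not exclusively opened by someone else.
         Opening read-write makes the probe catch more sharing
         violations on Windows than a read-only open does. */
      if (!adding && entry && entry->schedule != svn_wc_schedule_delete)
        {
          apr_file_t *file;
          SVN_ERR(svn_io_set_file_read_write(fb->path, FALSE, subpool));
          SVN_ERR(svn_io_file_open(&file, fb->path,
                                   APR_READ | APR_WRITE,
                                   APR_OS_DEFAULT, subpool));
        }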

I'd be happy to handle it better in the log-running code, though, if
we can. Or maybe update should just run cleanup automatically, or
something like that?
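
At the client level, the automatic cleanup could look something like
this sketch (do_update() is a stand-in for the real update call;
SVN_ERR_WC_LOCKED and svn_client_cleanup() are the real error code
and API):

      /* If the update left the working copy locked, run cleanup and
         retry once rather than making the user do it by hand. */
      svn_error_t *err = do_update(path, ctx, pool);
      if (err && err->apr_err == SVN_ERR_WC_LOCKED)
        {
          svn_error_clear(err);
          SVN_ERR(svn_client_cleanup(path, ctx, pool));
          err = do_update(path, ctx, pool);
        }
      return err;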

DJ

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Tue Aug 1 02:01:40 2006
