
Re: over-tolerant code in run_file_install?

From: Mattias Engdegård <mattiase_at_bredband.net>
Date: Tue, 21 May 2013 23:58:49 +0200

On 21 May 2013, at 12:02, Bert Huijben wrote:

> Before we had a single DB we stored the workqueue items per directory.
> (We had a wc.db per directory, just like the old entries). If the
> workqueue contained an item that specified to install an item in the
> directory, we could safely assume that the directory existed.
> (Otherwise it would certainly not contain a .svn/wc.db)
>
> During release testing of 1.7 we encountered more missing-directory
> scenarios than our testsuite tried. (It was designed in a
> pre-single-db world where this was impossible)
>
> So we added this automatic creation of the directory.

Thanks for the kind explanation. The automatic directory creation does
not appear to apply to special files, but perhaps that wasn't
considered important.
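In outline, the automatic parent-directory creation being discussed might look like the following sketch. This is a hedged illustration only, not Subversion's actual run_file_install code; the helper name `install_file` is hypothetical.

```python
import os

def install_file(path, contents):
    """Install a working file, creating any missing parent directories.

    A simplified sketch of the behavior described above: before
    single-db, a workqueue item's mere existence implied its directory
    existed; now the install step tolerates a missing parent and
    recreates it rather than failing.
    (Hypothetical helper -- not Subversion's run_file_install.)
    """
    parent = os.path.dirname(path)
    if parent:
        # Recreate any missing parent directories; a directory that
        # already exists is fine (exist_ok=True).
        os.makedirs(parent, exist_ok=True)
    with open(path, "wb") as f:
        f.write(contents)
```

Note that this sketch, like the real code as described in this thread, only covers regular files; a special file (symlink, device node) would need its own install path.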

> The workqueue implements a different way of atomic behavior: we store
> the operation until it succeeded. Every operation can be retried
> infinitely. And only after it succeeded and the operation is marked
> completed do we go to the next operation.

But the purpose of the automatic parent directory creation is to avoid
failing altogether, right? If it were acceptable to let the user
repair the tree and then retry the workqueue operation, then you
wouldn't have bothered adding that code in the first place.
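The retry semantics Bert describes, where an item stays queued until its operation succeeds and only then advances, can be sketched roughly as follows. This is an illustrative sketch under assumed simplifications, not wc.db's actual schema or API; in the real design the queue is persisted in the database and retried on a later run rather than in a tight loop.

```python
def run_workqueue(queue):
    """Drain a workqueue with retry-until-success semantics.

    Each item is a retryable callable. An item is only removed
    (marked completed) after it has succeeded; a failed item stays
    at the head of the queue so it can be retried.
    (Illustrative sketch only.)
    """
    while queue:
        op = queue[0]      # peek; do not remove until it succeeds
        try:
            op()           # the operation must be safe to retry
        except OSError:
            continue       # leave the item queued; retry it
        queue.pop(0)       # success: mark completed, go to next item
```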

> We can certainly optimize the single copy case (but that fix would
> really belong in APR, our platform abstraction layer), but then for
> many cases we would have to introduce another copy.

Why would we need to introduce another copy if we want to make a
common file copy operation faster? Please explain.
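For context, the kind of single-copy optimization such a platform abstraction layer could offer might look like this sketch: try an efficient in-kernel path where the OS provides one, and fall back to an ordinary userspace copy elsewhere. The function name is hypothetical and this is not how APR's file copy actually works; it is only meant to show where such a fast path would sit.

```python
import os
import shutil

def copy_file(src, dst):
    """Copy src to dst, preferring an in-kernel fast path.

    os.copy_file_range (Linux, Python 3.8+) lets the kernel move the
    data without round-tripping it through userspace buffers; on
    platforms without it we fall back to a portable read/write copy.
    (Sketch only -- not APR's apr_file_copy.)
    """
    try:
        with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
            size = os.fstat(fsrc.fileno()).st_size
            copied = 0
            while copied < size:
                n = os.copy_file_range(fsrc.fileno(), fdst.fileno(),
                                       size - copied)
                if n == 0:
                    break
                copied += n
    except (AttributeError, OSError):
        # Portable fallback: ordinary userspace copy.
        shutil.copyfile(src, dst)
```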
Received on 2013-05-21 23:59:25 CEST

This is an archived mail posted to the Subversion Dev mailing list.