Subversion still reports the working copy as 'locked' in this case, just like Subversion 1.0-1.6 did back when we still used 'loggy' operations. So there is still a quite visible status telling you that something remains to be done.
(Every directory shows up with a locked status, not as unmodified or similar.)
The change in 1.7 was an implementation detail that caused us to block operations at the point where they opened the database. But if your client had opened the database earlier (or while there were temporarily no workqueue items, as happens every time the workqueue is run), it could just open it and use that handle during all future operations.
Blocking out just the initial db open, while not blocking new operations once a db handle is already open (e.g. when cached in svn_client_ctx_t's svn_wc__db_t instance), is in my opinion more inconsistent than the current behavior, which performs the check on obtaining the write lock.
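To make the inconsistency concrete, here is a toy sketch in Python (nothing to do with Subversion's actual C implementation; all class and method names are invented for illustration). With a check only at open time, a handle opened while the workqueue was empty, and then cached, bypasses the check for every later operation; a check at write-lock time fires no matter when the handle was opened:

```python
# Toy model contrasting the two places a stale-workqueue check could live:
# at db open time vs. at write-lock time. Purely illustrative.

class WcDb:
    def __init__(self):
        self.workqueue = []          # pending items left by an aborted write

class OpenTimeCheckClient:
    """Checks only when opening the db; a cached handle bypasses the check."""
    def __init__(self, db):
        if db.workqueue:
            raise RuntimeError("working copy locked; run cleanup")
        self.db = db                 # cached, like svn_wc__db_t in the context

    def status(self):
        # Read-only op on the cached handle: never re-checks the workqueue.
        return "status"

class WriteLockCheckClient:
    """Opens unconditionally; checks when obtaining the write lock."""
    def __init__(self, db):
        self.db = db

    def status(self):
        return "status"              # read-only: allowed

    def commit(self):
        # A write needs the lock, so the check fires consistently here.
        if self.db.workqueue:
            raise RuntimeError("working copy locked; run cleanup")
        return "committed"

db = WcDb()
client = OpenTimeCheckClient(db)     # opened while the workqueue was empty
db.workqueue.append("wq item")       # an aborted write leaves an item behind
print(client.status())               # cached handle sails past the check
```

The point of the sketch: under the open-time policy, whether you hit the check depends entirely on when your client happened to open the database, which is the inconsistency described above.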
From: 'Evgeny Kotkov'
Sent: Wednesday, May 13, 2015 5:23 PM
To: Bert Huijben
Cc: Ivan Zhakov, dev_at_subversion.apache.org, Stefan Sperling
Bert Huijben <bert_at_qqmail.nl> writes:
> In Subversion 1.7 we queued database operations that contained operations
> that would change the database further. This left the database in an
> inconsistent, mostly unsupported intermediate state, which would produce
> invalid results for certain operations. The cleanup was really required to
> make the database consistent again.
> All these problems were resolved for 1.8.0, but for 1.8 we didn't remove the
> restriction while we could have done that (stsp wrote that patch some time
> after 1.8).
> The interesting cases are things like recursive base-deletes (part of
> update processing). In 1.7 that operation was done as many recursive db
> transactions, each of which scheduled workqueue items. Since 1.8 there is a
> single db operation that schedules all workqueue operations, and it can
> never leave the database with half a base-delete.
Thank you for the explanation. It's good to know that the consistency of the
database with a non-empty workqueue is not a problem.
> When we moved this check I did another careful check to see if all wq
> operations were safe (and I checked the wq operations again shortly before
> branching 1.9). I don't see actual cases why we would really need to block
> out read-only operations any more.
However, there is the user experience part of the issue. Say, after an
aborted write, we are left with a database having a non-empty workqueue.
As per the above, its integrity is not suspect, but performing a read-only
operation such as 'svn st' could yield incomplete results due to the
unprocessed items in the workqueue.
As I see it, the problem is that if we no longer throw an error in this case,
we're potentially telling a bunch of lies to the end-user, or to an
automated CI builder that relies on this output. Furthermore, there is no
way of finding this out, and the problem will only show up upon the next
database write, which may or may not happen at all. In 1.8.13 everything is simple:
if you try to do anything with a stale workqueue, you get an error and have
to run 'svn cleanup'.
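(As an aside: a script that wanted to detect this situation on its own could query the workqueue directly. A minimal sketch, assuming the WORK_QUEUE table from the wc.db schema; the function name is hypothetical, and this is of course not a supported API, just a poke at the SQLite file:)

```python
import sqlite3

def has_pending_workqueue(wc_db_path):
    """Return True if the given wc.db has unprocessed workqueue items.

    Illustrative external check only, not part of libsvn_wc. Assumes the
    WORK_QUEUE table from Subversion's working-copy database schema.
    """
    conn = sqlite3.connect(wc_db_path)
    try:
        (count,) = conn.execute("SELECT COUNT(*) FROM WORK_QUEUE").fetchone()
    finally:
        conn.close()
    return count > 0
```

E.g. a CI wrapper could run such a check on .svn/wc.db before trusting the output of 'svn st', which is exactly the kind of safeguard the current error makes unnecessary.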
I am not sure if there are other benefits in the current behavior, but I think
that if we have to choose between not fixing issue #4390 in 1.9.0-rc1 and
potentially misleading users during common read-only operations, the first
option is better.
Am I missing something crucial?
Received on 2015-05-13 18:51:13 CEST