merge-tracking lookup can fail to get a read lock

From: David Glasser <glasser_at_mit.edu>
Date: 2007-06-07 17:59:55 CEST

[Disclaimer: I haven't been following merge-tracking development much
until recently; I didn't find any discussion of this issue in my
archives, but this may already be known.]

My impression was that the sqlite3 mergeinfo database is intended to
follow the same locking rules as fsfs: reads never need to obtain a
lock. Our current usage of sqlite3 does not seem to work that way.
While there are no consistency problems, any of the calls to
sqlite3_exec or sqlite3_step can return SQLITE_BUSY if another process
has an exclusive (write) lock on the database. We do not check for
SQLITE_BUSY in any of our calls to sqlite3_exec or sqlite3_step, nor
do we install a busy handler with sqlite3_busy_handler or
sqlite3_busy_timeout. Thus a SQLITE_BUSY result gets turned directly
into a Subversion error.

What should the policy be? Is it OK that mergeinfo reads can fail?
Should we use a timeout? Try a fixed number of times to reread? Keep
trying until we can read?

(I have not actually seen this failure case in practice. However, I
did some experimental work last night with storing revprops in sqlite3
using code with the same structure as the mergeinfo code, and I
definitely did trigger this sort of problem when I tried to read with
one process while writing with another.)

--dave

-- 
David Glasser | glasser_at_mit.edu | http://www.davidglasser.net/
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Thu Jun 7 18:00:11 2007

This is an archived mail posted to the Subversion Dev mailing list.