
merge-tracking lookup can fail to get a read lock

From: David Glasser <glasser@mit.edu>
Date: 2007-06-09 21:27:24 CEST

On 6/7/07, David Glasser <glasser@mit.edu> wrote:
> My impression was that the sqlite3 mergeinfo database is intended to
> follow the same locking rules as fsfs: reads never need to obtain a
> lock. Our current usage of sqlite3 does not seem to work that way.
> While there are no consistency problems, any of the calls to
> sqlite3_exec or sqlite3_step can return SQLITE_BUSY if another process
> has an exclusive (write) lock on the database. We do not check for
> SQLITE_BUSY in any of our calls to sqlite3_exec or sqlite3_step, nor
> do we install a busy handler with sqlite3_busy_handler or
> sqlite3_busy_timeout. Thus these get turned into Subversion errors.
>
> What should the policy be? Is it OK that mergeinfo reads can fail?
> Should we use a timeout? Try a fixed number of times to reread? Keep
> trying until we can read?
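
To make the failure mode concrete, here's roughly what one of those
reads looks like at the raw sqlite3 level (a standalone sketch, not
our actual mergeinfo-sqlite-index.c code; the query is just a
placeholder):

#include <stdio.h>
#include <sqlite3.h>

/* Illustrative only: read some rows while another process may hold
   the write lock.  With no busy handler installed, sqlite3_step()
   can return SQLITE_BUSY right away, and a caller that only expects
   SQLITE_ROW or SQLITE_DONE turns that into a hard error. */
static int
read_some_mergeinfo(sqlite3 *db)
{
  sqlite3_stmt *stmt;
  int err;

  err = sqlite3_prepare_v2(db, "SELECT mergedto FROM mergeinfo;",
                           -1, &stmt, NULL);
  if (err != SQLITE_OK)
    return err;

  while ((err = sqlite3_step(stmt)) == SQLITE_ROW)
    printf("%s\n", (const char *) sqlite3_column_text(stmt, 0));

  sqlite3_finalize(stmt);

  /* err can be SQLITE_BUSY here if a writer held the lock. */
  return (err == SQLITE_DONE) ? SQLITE_OK : err;
}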

Here's a patch that adds a 10-second timeout (with sqlite's built-in
backoff-and-retry algorithm). It seems like that should be more than
enough. Any thoughts?

[[[
For the mergeinfo database, ask sqlite3 to retry for up to ten seconds
any query that fails to get a lock.

* subversion/libsvn_fs_util/mergeinfo-sqlite-index.c
  (open_db): Call sqlite3_busy_timeout immediately after opening
   the database.
]]]
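
For anyone skimming, the change boils down to something like the
following (a sketch against the plain sqlite3 API, not the actual
diff; the real open_db in mergeinfo-sqlite-index.c returns an
svn_error_t * like the rest of our code, so treat the signature as a
simplification):

#include <sqlite3.h>

/* Retry lock-contended queries for up to ten seconds before failing
   (the value from the log message above).  Simplified sketch, not
   the real open_db. */
#define BUSY_TIMEOUT_MS 10000

static int
open_db(sqlite3 **db, const char *path)
{
  int err = sqlite3_open(path, db);
  if (err != SQLITE_OK)
    return err;

  /* Install sqlite's built-in backoff-and-retry busy handler, so
     sqlite3_exec/sqlite3_step keep retrying for up to ten seconds
     instead of returning SQLITE_BUSY immediately. */
  return sqlite3_busy_timeout(*db, BUSY_TIMEOUT_MS);
}

Note that if a writer holds the lock for the full ten seconds, sqlite
gives up and the BUSY still surfaces as a Subversion error, so the
timeout is a bound on waiting, not a guarantee that reads succeed.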

--
David Glasser | glasser@mit.edu | http://www.davidglasser.net/


