I'd imagine a configurable garbage collection timeout on abandoned
transactions (i.e., ones that aren't currently having code executing against
them): something along the lines of timing out in less than a day
(perhaps 30 min?), plus possibly additional code to detect denial-of-service
attacks, to prevent the number of in-progress transactions from going through
the roof.
From: Jim Blandy [mailto:firstname.lastname@example.org]
> I don't see a mechanism in there where I can save a transaction off to the
> side, to be picked up later by a different process or thread.
Yep --- that is deliberately omitted, since I don't know yet the best
way to get a handle on a transaction.
> Hmm. And I recall you posted a question about "how long is 'long'?". Not
> gonna go find it. Basically, I'd like to open a transaction, process some
> operations (that occur over multiple HTTP requests; therefore, over
> processes/threads), and then commit/abort the transaction.
> The operations would look something like:
> {
>     persistable_id = svn_fs_persist_txn
>     ... ops ...
> } repeat
> The sequence will be serialized, but will occur across multiple
> threads/processes. As a result, the transaction ID needs to be an "int" of
> some kind, or a blob of bytes, either of which I can persist to a file.
Right, that much I understood from your initial request. Perhaps I
didn't phrase my question well.
How do you plan to recognize and clean up abandoned transactions?
Since transactions are persistent, it's not safe for the filesystem to
clean them up when it discovers it's been restarted, as it can for
Berkeley DB-level transactions.
How will you make sure the database doesn't become crowded with
transactions long forgotten? Should the filesystem provide an
interface for listing all transactions in progress?
I don't need to know this stuff to write the filesystem. I'd be very
surprised if it makes any difference in the design. I just
want to understand the outside world a bit.
Received on Sat Oct 21 14:36:07 2006