I see two ways to implement it:
1. The 'nice' ;-) way, where the WC and RA layers collaborate to be able to
restore a working copy to its original state upon a network failure.
2. The 'pragmatic start' in which the RA layer implements a (fairly
limited subset of) session management and tries to reconnect a broken
connection before bailing out.
Option 1 would of course be excellent and provide a lot of robustness, but
perhaps option 2 would already add a fair amount of robustness while being
far less effort to implement.
As to the concrete ideas, that's a bit out of my league at this point in
time since I'm not (yet? ;-)) familiar with the SVN sources. But I
thought I'd suggest the two alternatives that I see anyway. :-)
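Option 2 amounts to a retry-on-reconnect wrapper around an RA request. A
minimal sketch in Python, purely illustrative; none of these names exist in
the Subversion sources:

```python
import time

class NetworkError(Exception):
    """Stands in for an RA-layer connection failure (invented name)."""

def with_reconnect(request, max_retries=3, backoff=0.0):
    """Run `request` (a callable that opens its own connection),
    retrying after a NetworkError before finally bailing out."""
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return request()
        except NetworkError as e:
            last_error = e
            time.sleep(backoff * attempt)  # simple linear backoff
    # All attempts failed: surface the last network error to the caller.
    raise last_error
```

The key design question is what "request" means here: it only works if the
whole request can safely be re-issued from scratch, which is exactly the
limitation discussed below.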
C. Michael Pilato <cmpilato_at_collab.net> writes:
> This is (obviously) a good idea. It's the implementation that I fear.
> Subversion's modularity keeps the network stuff (that moves the data
> which transforms a working copy from state to state) well away from the
> working copy management code (which actually understands those states).
> If a long-lived request, like a REPORT used during a checkout or update
> operation, was to die in the middle somewhere, the repository access
> layer would be completely oblivious to the details of the half-finished
> operation.
> Because our most widely used data transfer API is the "editor", which
> demands depth-first tree ordering with no revisitation, the RA layer
> would need to somehow signal the WC layer about the network problem so
> that either the WC could rollback to the same state it had before the
> initial request, or at least be placed into a mode where it expected to
> see much of the same data changes that it already saw (and know that
> this is okay).
> Allow me to wonder aloud so that my ignorance is easier to see.
> Could this be accomplished strictly at the RA layer level? What if the
> RA modules kept track of exactly where they were in processing a request
> when the connection dropped, and then, on repetition, ignored everything
> up to that point. I'm thinking about the likes of 'wget -c' (continue
> where I left off). So, for example, if libsvn_ra_dav know it had read
> 12,015 bytes off the stream successfully before something died, it would
> repeat, ignore 12,015 bytes, and then continue processing at the
> 12,016th byte. The working copy code (and perhaps even the user) would
> be oblivious to a problem having occured. Something tells me it just
> ain't that simple.
> Could this be accomplished strictly at the client layer level? We've
> done a lot of work to make operations like checkouts and updates
> restartable. There are still bugs in these areas (switches, notably),
> and some stuff that basically works but looks scary (merges showing 'G'
> for everything previously merged), but if we could get our subcommands
> to a place where the larger operation could be safely re-attempted, and
> where the RA layers return clear indications (in the forms of
> predictable, dedicated error codes) of when a failure has occurred for a
> network integrity reason, then perhaps this kind of re-attempt
> processing could happen even well up into the client libraries.
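The client-layer approach above might look roughly like this: the RA layer
tags network-integrity failures with a dedicated error code, and the client
library re-attempts the whole (restartable) subcommand only on that code.
The error-code names here are invented, not real Subversion codes:

```python
SVN_ERR_RA_NETWORK = "ra-network"   # stand-in for a dedicated error code
SVN_ERR_OTHER = "other"

class SvnError(Exception):
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def run_subcommand(operation, max_attempts=3):
    """Re-attempt `operation` only when the failure is flagged as a
    network-integrity problem; any other error propagates immediately."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except SvnError as e:
            if e.code != SVN_ERR_RA_NETWORK or attempt == max_attempts:
                raise
```

This depends entirely on the subcommands being safely restartable, which
is exactly the prerequisite the paragraph above identifies.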
Received on Wed Oct 27 16:58:41 2004