
RE: Re: Keeping NodeRevisionIDs from being unbounded

From: Bill Tutt <rassilon_at_lyra.org>
Date: 2002-03-09 22:53:52 CET

> From: Greg Hudson [mailto:ghudson@MIT.EDU]
> On 8 Mar 2002, Karl Fogel wrote:
> > I think it won't be necessary to do that. If we have to ask for
> > predecessors at all, the number of askings is going to be
> > proportional to the size of the change, and having to do a number
> > of DB access proportional to the size of the change is no big deal
> > -- I mean, this is a *commit*, after all, we just sent the data
> > over the network. :-)
>
> I'm a little nervous about the implications for skip-deltas, assuming
> we ever do them (and I hope we do). To do the re-deltifications on a
> commit, we need to walk back some number of predecessors from the
> current node-revision ID -- only two on average, but sometimes a
> whole bunch. In particular, at each power of two we re-deltify the
> oldest revision against the head, requiring a complete walk of the
> predecessors back to the root of the node.
>
> For this kind of operation, it would be most ideal to have random
> access to the node-revision ID list for a node, e.g. by forcing the
> node-revisions to be 100.1, 100.2, etc. Forcing that does require
> the "stabilization walk" at the end of a commit which we'd like to
> get rid of, though.
>
> I guess that could argue for Branko's opportunistic re-deltification
> approach, but I don't really like how that approach only yields
> reasonable performance the second time you access an old revision.
>
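
(As an aside, to make the power-of-two point above concrete: here is a
tiny standalone sketch of the usual skip-delta rule, where the k-th
node-revision of a node is deltified against revision k with its lowest
set bit cleared. skip_delta_base and walk_length are made-up names for
illustration, not the actual filesystem code. At every power of two the
base is 0, so the walk has to follow the predecessor chain all the way
back to the root of the node.)

    /* Sketch only: skip-delta base selection, and the length of the
     * predecessor walk needed to reach that base when all we have is
     * a singly linked predecessor chain. */
    #include <stdio.h>

    /* Base revision to delta the k-th node-revision against. */
    static unsigned int skip_delta_base(unsigned int k)
    {
      return k & (k - 1);       /* clear the lowest set bit */
    }

    /* Predecessor links to follow from the head (at k) to the base. */
    static unsigned int walk_length(unsigned int k)
    {
      return k - skip_delta_base(k);
    }

    int main(void)
    {
      unsigned int k;
      for (k = 1; k <= 16; k++)
        printf("rev %2u: delta against %2u, walk %2u predecessors\n",
               k, skip_delta_base(k), walk_length(k));
      return 0;
    }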

In a follow-on email to my original note, I mentioned that the only
extra cost involved in finding the set of predecessors of a NodeChange
(aka node-revision) is to load the NodeChange row and parse the skel
for the in-row predecessor string, then do whatever you need to do once
you have that information. Having an unbounded column is much saner
than having an unbounded PK. This applies to SQL stores specifically,
but BDB would probably run into problems with such unbounded PKs
eventually anyway.
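
Roughly, the lookup is a single row fetch plus a little string parsing.
The "(pred <id>)" layout and find_predecessor below are hypothetical
stand-ins for the real skel format, just to show the shape of that
one-access cost:

    /* Sketch only: pulling the predecessor string out of a
     * node-revision row without touching any other table. */
    #include <stdio.h>
    #include <string.h>

    /* Copy the token following "(pred " into buf; 0 on success. */
    static int find_predecessor(const char *skel, char *buf,
                                size_t buflen)
    {
      const char *p = strstr(skel, "(pred ");
      size_t i = 0;

      if (p == NULL)
        return -1;                    /* no predecessor recorded */
      p += strlen("(pred ");
      while (*p && *p != ')' && i + 1 < buflen)
        buf[i++] = *p++;
      buf[i] = '\0';
      return 0;
    }

    int main(void)
    {
      const char *row = "(node-revision (pred 100.1.2) (text-rep 37))";
      char pred[64];

      if (find_predecessor(row, pred, sizeof(pred)) == 0)
        printf("predecessor: %s\n", pred);  /* one row read, one parse */
      return 0;
    }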

The proposal in no way drastically increases the number of database
accesses it takes to find the ancestry set (i.e. it does not become
dependent on the number of ancestors). It goes from 0 accesses given a
NodeRevisionID to 1 given any NodeRevisionID.

Things would of course be entirely different if I were suggesting
storing ancestry information only in a normalized fashion, but I'm not.
I wouldn't mind having a normalized store as well, though.

In a separate email thread Greg Stein mentioned that he wanted to move
the deltification pass outside of the actual svn transaction commit
path (i.e. outside the BDB transaction that we currently use for
committing our SVN transaction). I think Greg suggested doing this
asynchronously, after the svn transaction commit succeeds. Doing the
deltification in a different (or indeed many different) BDB
transactions will drastically reduce the lock usage from having to
touch so many more rows, especially since BDB doesn't seem to be very
intelligent about lock escalation.
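
The shape I have in mind is roughly the following: commit the SVN
transaction in one (unchanged) BDB transaction, then re-deltify
afterwards, one small transaction per touched node. begin_txn,
commit_txn, and the node list below are printf stand-ins, not real BDB
calls; the sketch is only about where the transaction boundaries fall.

    /* Sketch only: "commit now, deltify later", with each
     * re-deltification in its own short transaction instead of
     * piling locks onto the commit transaction. */
    #include <stdio.h>

    static void begin_txn(const char *what)  { printf("BEGIN  %s\n", what); }
    static void commit_txn(const char *what) { printf("COMMIT %s\n", what); }

    static void commit_svn_transaction(void)
    {
      begin_txn("svn commit");
      printf("  write new node-revisions, bump youngest rev\n");
      commit_txn("svn commit");       /* client sees success here */
    }

    static void deltify_one_node(const char *node_rev_id)
    {
      begin_txn(node_rev_id);         /* one small txn per node */
      printf("  re-deltify predecessor of %s against head\n", node_rev_id);
      commit_txn(node_rev_id);
    }

    int main(void)
    {
      const char *touched[] = { "100.1.5", "101.3.2", "102.1.9" };
      size_t i;

      commit_svn_transaction();
      /* later, asynchronously: */
      for (i = 0; i < sizeof(touched) / sizeof(touched[0]); i++)
        deltify_one_node(touched[i]);
      return 0;
    }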

Does this make sense?
Bill

