
Re: A forward-thinking thought about deltas in FS design

From: Greg Hudson <ghudson_at_MIT.EDU>
Date: 2003-07-07 09:15:27 CEST

On Mon, 2003-07-07 at 02:21, Tom Lord wrote:
> > You don't typically get performance "guarantees" with a
> > cache because you don't know if it has been populated with the fast path
> > you want since the last time it has been thrown away.
>
> That's false.
>
> You have to consider (a) your algorithm that decides what to cache and
> what to throw away; (b) the size of your cache.
>
> Under (often quite reasonable) interactions of (a) and (b), you can
> make strong guarantees.

You can really only make the strong guarantees if you always store a
skip-delta immediately upon commit and never throw them away. That's a
very degenerate case of "caching." (I grant that if you throw away a
minuscule fraction of the skip-delta information you will, in some
cases, only get a minor degradation in read performance. But because
you'd get an even less significant improvement in throughput or space,
there is no advantage in thinking of the skip-delta store as a cache.)
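
For concreteness, here is a small sketch (illustrative Python, showing
one common formulation of a skip-delta rule rather than any particular
implementation): revision N is stored as a delta against N with its
lowest set bit cleared.  The short reconstruction chains that rule buys
you exist only as long as every delta along the chain is kept.

    def skip_delta_base(rev):
        """Delta base for a revision: clear the lowest set bit.

        E.g. 54 (110110) -> 52 (110100) -> 48 (110000) -> 32 -> 0.
        """
        return rev & (rev - 1)

    def reconstruction_chain(rev):
        """Revisions whose deltas must all be present to rebuild rev."""
        chain = []
        while rev > 0:
            chain.append(rev)
            rev = skip_delta_base(rev)
        return chain

    # The chain length is the popcount of the revision number, so it
    # never exceeds lg(N) + 1 -- but only if no link is missing.
    print(reconstruction_chain(54))    # [54, 52, 48, 32]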

> Given a cache such as I described, the hard guarantees you're worried
> about obtain. You can back it up if you like. You can so send in a
> junior-clueless-admin to run "rm -rf" on a portion of the cache -- and
> then what happens? the performance guarantees are lost for a few
> hours. the system gracefully carries on anyway. then everything is
> back to normal.

For a large project, this means for that period of time (which may well
be days, not hours) the system doesn't work. If it takes two days
instead of half an hour to check out the latest version of Mozilla, the
system is not carrying on gracefully.

> > > The caching/memoizing approach would make it
> > > easier, I think, to be computing skip-deltas concurrently and
> > > opportunistically.
>
> > Why would you want to?
>
> For better actual, real-world performance.

You're skipping too many steps here. How does it result in better
real-world performance to compute a single delta and then (concurrently
or "opportunistically," whatever that means) compute the skip delta you
need for fast access?

> > It doesn't take more time (on average) to
> > compute the skip-delta than it does to compute the single delta.
>
> That isn't the point. It's not relevant to anything I said.

When you skip too many steps, I have to guess at your intermediate
thought processes. When I guess wrong, I may say something irrelevant
to what you were thinking, though it's certainly relevant to something
you could have been thinking given what you wrote. In this case, I was
theorizing that you thought one could get better throughput in the
primary store by taking less time to compute a single delta, an argument
which doesn't hold. I still don't know what your actual argument is on
this point.

(Perhaps you're saying you'd get faster throughput in the primary store
because you have to write less data? That's too simplistic; you don't
get faster compression throughput by running bzip2 than you do with
gzip, even though you're writing less data.)
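
The analogy is easy to check with the Python standard library.  In the
rough sketch below, the input path is only an example (any sizable text
file will do) and the exact timings are machine-dependent:

    import bz2, time, zlib

    data = open("/usr/share/dict/words", "rb").read()  # any big text file

    t0 = time.perf_counter()
    gz = zlib.compress(data, 9)
    t1 = time.perf_counter()
    bz = bz2.compress(data, 9)
    t2 = time.perf_counter()

    # bzip2 usually wins on size but loses on time: writing fewer bytes
    # does not by itself make the write transaction faster.
    print("gzip :", len(gz), "bytes,", round(t1 - t0, 3), "s")
    print("bzip2:", len(bz), "bytes,", round(t2 - t1, 3), "s")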

> > > Read-only mirrors and clients could be building their own caches.

> > Again, no advantage versus my approach; read-only mirrors and clients
> > could be building their own caches there too.

What I should have said here is: in my approach, a read-only mirror of
the primary repository gets fast access just like the primary does.
Being able to build your own cache of a slow primary isn't any better.

> In general, you seem to be armed with a lot more basic theory than
> non-basic theory or practice.

I'm not really interested in having an ad hominem argument with you.

> > > Given the kinds of access needed to "row 0",
> > > perhaps it could be stored more compactly and updated more quickly.
>
> > I don't see this translating into a practical advantage.
>
> Um, much faster and more space-efficient write-txns?

I don't think the write transactions become faster. In my approach, you
pick an appropriate rev to delta against, produce it (which is fast,
because producing a given rev is always fast), compute the delta, and
store that. In your approach, you pick the previous rev, produce it,
compute the delta, and store. Computing a delta between two dissimilar
files doesn't take more time than computing a delta between two similar
files, so we've taken the same length of time. (Perhaps you're
implicitly arguing that the client can do the work of computing the
single delta. Subversion isn't currently structured to allow that, and
the server probably couldn't ensure database integrity if it were, so
I'll assume that you aren't making that argument.) Then you also
compute a skip-delta and store it (you can't allow this process to fall
behind if you want guaranteed acceptable performance), which takes
additional time.
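
Spelled out as a sketch (Python pseudocode of the argument, not of
either implementation; fulltext() and delta() are hypothetical
stand-ins for whatever the storage layer actually does), the two commit
paths do the same work, except that the cache scheme adds a step:

    def fulltext(repo, rev):
        return repo[("text", rev)]      # producing any rev is assumed fast

    def delta(old, new):
        return ("delta", old, new)      # placeholder for a real binary delta

    def commit_skip_delta(repo, rev, new_text):
        # Pick an appropriate base rev, produce it, compute one delta,
        # store it.  Done.
        base = rev & (rev - 1)          # e.g. a lowest-set-bit rule
        repo[("delta", rev)] = delta(fulltext(repo, base), new_text)

    def commit_single_delta_plus_cache(repo, cache, rev, new_text):
        # The same steps, against the previous rev...
        repo[("delta", rev)] = delta(fulltext(repo, rev - 1), new_text)
        # ...plus a second delta for the cache, which cannot be allowed
        # to fall behind if the read-performance guarantee is to hold.
        base = rev & (rev - 1)
        cache[("delta", rev)] = delta(fulltext(repo, base), new_text)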

The write transactions to the primary store do take less space in your
scheme, by an average factor of lg(N) in the worst comparative case (I
am curious: in real projects, how much larger is the average delta
across M revs of a file than the average delta across one rev, as a
function of M?). But then you also have to go and add data to the
cache. So where is the practical advantage? If you want the
performance guarantees of the skip-deltas, you have to accept the cost;
splitting the work between a primary store and a cache does not make
the total cost any less.
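
For what it's worth, here is where a lg(N)-sized factor can come from,
assuming a lowest-set-bit base rule like the one sketched above and the
worst comparative case in which a delta spanning M revisions is M times
the size of a single-rev delta (illustrative arithmetic, not measured
project data):

    # Average number of revisions a skip-delta spans under the
    # lowest-set-bit rule (rev & (rev - 1) as the base).
    N = 1024
    spans = [rev - (rev & (rev - 1)) for rev in range(1, N)]
    print(sum(spans) / len(spans))     # about 5.0, roughly lg(N)/2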

> > > A cache could easily revert to full-texts when skip-deltas become
> > > irrational.
>
> > Why would they be any more likely to become irrational than single
> > deltas would be?
>
> Hello?!?! Once again, large skip-deltas on busy files are going to
> converge on full-text sizes.

I think I understand what you're arguing. But if a skip-delta turns out
to be larger than the plain text (which should never happen except by
a tiny margin, in the degenerate case where the target data is
incompressible and gets no help at all from the source), it's just as
easy to
substitute plain text in my scheme as it is in yours.
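
That substitution is just a size check at store time.  A minimal
sketch, assuming the delta and the full text are both in hand as byte
strings (the names here are hypothetical):

    def store_rep(store, rev, delta_bytes, fulltext_bytes):
        # Keep whichever representation is smaller; fall back to the
        # plain text when the delta does not actually save space.
        if len(delta_bytes) < len(fulltext_bytes):
            store[rev] = ("delta", delta_bytes)
        else:
            store[rev] = ("fulltext", fulltext_bytes)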

> > You've proposed a tiny little simplification of the primary storage
> > manager at the expense of decomposing the storage system into two parts,
> > radically increasing the overall complexity of the system. Not a win.

> ?

I don't see what's so hard to understand about this. A system composed
of a primary store and a cache is more complicated than a system
composed of only a primary store, unless both parts of the decomposed
system can be radically simpler than the unified primary store. Since
you have proposed only a tiny simplification to the primary store, your
overall system would be more complicated.

Received on Mon Jul 7 09:16:21 2003
