
Re: Making blame even faster

From: Daniel Berlin <dberlin_at_dberlin.org>
Date: 2005-02-10 05:37:10 CET

On Thu, 2005-02-10 at 03:57 +0100, Branko Čibej wrote:
> Daniel Berlin wrote:
>
> >1. For a simple byte level blame, we should be able to get the answers
> >from the diff format alone, without actually expanding it. You can
> >simply mark the affected byte ranges, and stop when the whole file has
> >been affected. What the output looks like is up for grabs.
> >
> >2. You could do a similar thing with line-level blame: discover the
> >line byte ranges of the current revision, throw them into an
> >interval tree, and then do queries to see which lines a given diff
> >affects, based on its byte ranges. Stop when every line has been
> >affected.
> >
> >
> >2 seems a bit harder than 1.
> >
> >I've been out of it for a while, so maybe there is something about
> >implementing #1 I have missed.
> >Have I missed anything about #1 that makes it hard?
> >
> >
> If by "diff format" you mean the binary delta, then... There was a time
> when I thought this would be possible. Now I'm not so sure. The trouble
> is that vdelta doesn't generate an edit stream, it generates a
> compressed block-copy stream. Which means you can never be 100% sure,
> just from looking at the delta, which bytes in the target are really new
> and which are just (offset) copies from the source. The only blocks you
> can really be sure about are those that are represented by new data in
> the delta (either NEW blocks or target copies that take data from NEW
> blocks). This problem is made worse by our use of skip deltas in (at
> least) the BDB back-end.
>
> I agree that it would be nice if the server could generate some sort of
> byte-range oriented info, but I don't think it can be done just by
> looking at the deltas. It's sad, I know...
>
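To make that ambiguity concrete, here is a made-up window op stream
(illustrative only, not output from a real repo):

   new          "int x;"   ; certainly new bytes
   copy-source  0, 4096    ; certainly old bytes, straight from the source
   copy-target  10, 200    ; copies *earlier target* bytes, which may
                           ; themselves be new, old, or a mix

The copy-target op is the problem case: you can't attribute its bytes
by inspection without expanding what it copies.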
After looking at the real slowdown, I'm not actually worried about this
so much anymore. For now it's not the main source of slowdown for
blame; see my followup mail on the delta combiner.

The delta combiner falls down badly when you have even a moderate
number of target ops in the window. It goes O(n^2) and takes up a large
amount of the time.

I have some preliminary numbers comparing vdelta, xdelta1, and a noop
delta.

The xdelta numbers are literally an xdelta implementation inserted where
we currently have vdelta. It's untuned, but it is faster, and it never
compresses against the target data already generated (i.e., it generates
no target ops).
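For the curious, the heart of an xdelta1-style matcher is tiny. Here is
a minimal sketch (hypothetical code, not the attached diff; the emit_*
helpers stand in for appending svndiff ops):

#include <string.h>

#define BLOCKSIZE 16
#define TABLE_SIZE 4096          /* power of two; collisions just overwrite */
#define NO_MATCH ((size_t) -1)

static size_t hash_block(const char *p)
{
  size_t h = 0;
  int i;
  for (i = 0; i < BLOCKSIZE; i++)
    h = h * 127 + (unsigned char) p[i];
  return h & (TABLE_SIZE - 1);
}

/* Stand-ins for building the real op stream. */
static void emit_copy_from_source(size_t off, size_t len) { (void) off; (void) len; }
static void emit_new_data(const char *data, size_t len)   { (void) data; (void) len; }

static void xdelta_sketch(const char *src, size_t src_len,
                          const char *tgt, size_t tgt_len)
{
  size_t table[TABLE_SIZE];
  size_t i, pos = 0, lit_start = 0;

  for (i = 0; i < TABLE_SIZE; i++)
    table[i] = NO_MATCH;

  /* Index every non-overlapping BLOCKSIZE block of the source. */
  for (i = 0; i + BLOCKSIZE <= src_len; i += BLOCKSIZE)
    table[hash_block(src + i)] = i;

  /* Walk the target, emitting source copies and literals only --
     never copies against already-generated target data. */
  while (pos + BLOCKSIZE <= tgt_len)
    {
      size_t cand = table[hash_block(tgt + pos)];
      if (cand != NO_MATCH
          && memcmp(src + cand, tgt + pos, BLOCKSIZE) == 0)
        {
          size_t len = BLOCKSIZE;
          while (cand + len < src_len && pos + len < tgt_len
                 && src[cand + len] == tgt[pos + len])
            len++;                      /* extend the match forward */

          if (pos > lit_start)          /* flush any pending literal */
            emit_new_data(tgt + lit_start, pos - lit_start);
          emit_copy_from_source(cand, len);
          pos += len;
          lit_start = pos;
        }
      else
        pos++;
    }
  if (tgt_len > lit_start)
    emit_new_data(tgt + lit_start, tgt_len - lit_start);
}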

The noop numbers are with the vdelta call simply replaced by the
generation of a single txdelta op representing the entire target text
(i.e., no compression; the delta is the fulltext).
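In svndiff terms that is one window whose single op is new data covering
the whole target. Roughly this shape (a sketch against the public
svn_delta.h types; real code would go through the window-building
helpers in libsvn_delta rather than filling in the struct by hand):

#include "svn_delta.h"

static svn_txdelta_window_t *
make_fulltext_window(const svn_string_t *target, apr_pool_t *pool)
{
  svn_txdelta_window_t *window = apr_pcalloc(pool, sizeof(*window));
  svn_txdelta_op_t *op = apr_palloc(pool, sizeof(*op));

  op->action_code = svn_txdelta_new;   /* literal new data; no copies */
  op->offset = 0;                      /* offset into new_data */
  op->length = target->len;

  window->sview_len = 0;               /* reads nothing from the source */
  window->tview_len = target->len;     /* produces the entire target */
  window->num_ops = 1;
  window->ops = op;
  window->new_data = target;           /* the "delta" is the fulltext */
  return window;
}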

Since they both generate svndiff, no client or server changes are
necessary. The timings are over local (file://) access, to avoid
network turnaround.

All repos are fsfs, loaded from the same fulltext dumpfile, containing
2230 Subversion revisions, which correspond to ~2400 CVS revisions of
two frequently changed files in gcc; one file has 555 changes, the
other the rest :).

No regression tests fail on any of the above versions, and the repo
text MD5s verify for every revision (i.e., expansion from deltas to
fulltexts is working okay). All were compiled from the same tree with
-O2.

The two files exist and were changed on a number of branches as well,
and those branches are included in the dumpfile.

vdelta based repo:

22931 repo/db/revs
9583 repo/db/revprops
32621 repo/db
8 repo/conf
20 repo/hooks
8 repo/locks
32569 repo

xdelta based repo:
[dberlin@dberlin repro]$ du repo3
29379 repo3/db/revs
9583 repo3/db/revprops
38973 repo3/db
8 repo3/conf
20 repo3/hooks
8 repo3/locks
39017 repo3

~20% size increase.

noop based repo:
450835 repo2/db/revs
9583 repo2/db/revprops
460430 repo2/db
8 repo2/conf
20 repo2/hooks
8 repo2/locks
460474 repo2

About what you'd expect :)

However, watch this:

(Again, "over the wire" here means over file://, but it still sends
delta streams anyway, because that's what get_file_revisions does :P.)

vdelta based repo, using vdelta over the wire:
time svn blame on file with the smaller number of revisions: 38 seconds
time svn blame on file with the larger number of revisions: 4 minutes 38
seconds

vdelta based repo, using xdelta over the wire:
time svn blame on file with the smaller number of revisions: 38 seconds
time svn blame on file with the larger number of revisions: 4 minutes 16
seconds

noop based repo, using noop over the wire:
time svn blame on file with the smaller number of revisions: 24 seconds
time svn blame on file with the larger number of revisions: 1 minute 19
seconds

noop based repo, using xdelta over the wire:
time svn blame on file with the smaller number of revisions: 18 seconds
time svn blame on file with the larger number of revisions: 1 minute 14
seconds

xdelta based repo, using xdelta over the wire:
time svn blame on file with the smaller number of revisions: 14 seconds
time svn blame on file with the larger number of revisions: 1 minute 3
seconds

These numbers aren't counterintuitive if you think about it, and you are
more than welcome to try the timings yourself if you disbelieve them :P
The dumpfile is the one posted in the other message.

The xdelta repo only carries a ~20% space premium over vdelta in the
size of the data it's "transmitting" around, but it causes no O(n^2)
behavior in the delta combiner, because it has no target copy ops.

You'll note the delta combiner passes source copy ops through
relatively unchanged; it just fixes them up a small amount. It does
*not* need to call copy_source_ops on them.
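A grossly simplified sketch of the shape of that logic (the real code
lives in libsvn_delta/compose_delta.c; the types and names below are
illustrative, and the real thing indexes A's ops rather than scanning,
and handles self-overlapping target copies):

#include <stddef.h>

enum action { OP_NEW, OP_SOURCE, OP_TARGET };
struct op { enum action action; size_t offset; size_t length; };
struct window { const struct op *ops; int num_ops; };

/* Stand-in for appending an (offset-adjusted) op to the output. */
static void fix_up_and_emit(const struct op *op, size_t skip, size_t take)
{ (void) op; (void) skip; (void) take; }

/* Rewrite the range [off, off+len) of window A's target in terms of
   ops the combined delta can carry. */
static void copy_source_ops(const struct window *A, size_t off, size_t len)
{
  size_t produced = 0;   /* where the current op starts in A's target */
  int i;

  for (i = 0; i < A->num_ops && len > 0; i++)
    {
      const struct op *op = &A->ops[i];
      size_t op_end = produced + op->length;

      if (op_end > off)  /* this op overlaps the wanted range */
        {
          size_t skip = off > produced ? off - produced : 0;
          size_t take = op->length - skip;
          if (take > len)
            take = len;

          if (op->action == OP_TARGET)
            /* The copied bytes were themselves produced by earlier ops
               of A, so recurse to expand them.  Windows full of target
               ops make this recursion revisit the same ops over and
               over -- the O(n^2) described above. */
            copy_source_ops(A, op->offset + skip, take);
          else
            /* Source copies and new data pass through cheaply. */
            fix_up_and_emit(op, skip, take);

          off += take;
          len -= take;
        }
      produced = op_end;
    }
}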

The blame output of all of them is identical, as expected.
I am in the process of loading a much larger repo (4 gig vdelta'd, with
79,000 revisions and many files with thousands of revisions).

I unfortunately cannot create a noop version of this repo, as it would
take ~55 gig (the size of the original dumpfile with plaintexts), and I
don't have the extra space ATM.

However, loading is also going much faster on the xdelta version of this
repo, as one would expect.

Note that the profiles for the above xdelta and noop runs look like
what you would expect. The delta combiner is completely off the radar
because there are no target ops; noop takes more time because it's
streaming more data through the streams.

I have attached the diff for the xdelta and noop implementations, so
that anyone can experiment with the above.
Remember that if you want to accurately test the speed of the delta
combiner with each, you need to redump the dumpfile I posted into
fulltext, and then reload the repo from that dumpfile with the
appropriate delta algorithm defined, since it's all svndiff to the rest
of Subversion :)

If you look at profiles, you'll also see that the untuned xdelta is
about 5x faster than vdelta at just compressing the texts it sends over
"the wire".

The upshot of all of this is that I'll probably switch the gcc repo
server to use xdelta to generate its diffs, until the delta combiner is
fixed so it's not so slow.

It also seems like it may make sense to allow xdelta as a client
option, since it's so much faster CPU-wise than vdelta, even if you
exclude the target ops problem (i.e., just computing diffs). Again, to
the rest of Subversion it's all svndiff, so this isn't even an
incompatible change.

On a side note, I'm accepting comments on the general idea of including
xdelta as an option for 1.2 or 1.3. I could clean the diff up and
whatnot relatively quickly, and the algorithm is not very hard to
understand, etc.

--Dan
