
Re: Making blame even faster

From: Branko Čibej <brane_at_xbc.nu>
Date: 2005-02-10 03:57:32 CET

Daniel Berlin wrote:

>1. For a simple byte level blame, we should be able to get the answers
>from the diff format alone, without actually expanding it. You can
>simply mark the affected byte ranges, and stop when the whole file has
>been affected. What the output looks like is up for grabs.
>2. You could actually do a similar thing with line level blame.
>Discover the line byte ranges of the current revision, throw it into an
>interval tree, and then do queries to see which line a given diff
>affects, based on its byte ranges. Stop when every line has been
>affected.
>2 seems a bit harder than 1.
>I've been out of it for a while, so maybe there is something about
>implementing #1 I have missed.
>Have I missed anything about #1 that makes it hard?
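For what it's worth, idea #2 could look something like the sketch below. This is purely illustrative, not Subversion code: it uses a sorted list of line start offsets plus binary search in place of a real interval tree, and assumes the changed byte ranges per revision have already been extracted somehow, newest revision first.

```python
import bisect

def line_offsets(text: bytes):
    """Return the starting byte offset of each line, plus the total length."""
    offsets = [0]
    for i, b in enumerate(text):
        if b == 0x0A:  # '\n'
            offsets.append(i + 1)
    if offsets[-1] != len(text):
        offsets.append(len(text))
    return offsets  # line k spans [offsets[k], offsets[k+1])

def blame_lines(text: bytes, changes_by_rev):
    """changes_by_rev: iterable of (rev, [(start, length), ...]) pairs,
    newest revision first. Returns {line_index: rev}."""
    offsets = line_offsets(text)
    nlines = len(offsets) - 1
    blamed = {}
    for rev, ranges in changes_by_rev:
        if len(blamed) == nlines:
            break  # every line attributed; stop early, as in idea #2
        for start, length in ranges:
            # find the lines whose byte ranges overlap [start, start + length)
            first = bisect.bisect_right(offsets, start) - 1
            last = bisect.bisect_left(offsets, start + length)
            for line in range(max(first, 0), min(last, nlines)):
                # newest revision touching a line wins
                blamed.setdefault(line, rev)
    return blamed
```

The early break is the whole point of the scheme: once every line of the current text has an owner, no older deltas need to be examined at all.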
If by "diff format" you mean the binary delta, then... There was a time
when I thought this would be possible. Now I'm not so sure. The trouble
is that the vdelta doesn't generate an edit stream; it generates a
compressed block-copy stream. That means you can never be 100% sure,
just from looking at the delta, which bytes in the target are really new
and which are just (offset) copies from the source. The only blocks you
can really be sure about are those that are represented by new data in
the delta (either NEW blocks or target copies that take data from NEW
blocks). This problem is made worse by our use of skip deltas in (at
least) the BDB back-end.
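To illustrate the distinction: in the toy op format below (loosely modeled on svndiff windows, not the real wire format), only bytes that ultimately trace back to NEW data can be marked new with certainty. A source copy may well reproduce unchanged old content at a different offset, so it tells you nothing definite.

```python
def definitely_new_mask(ops):
    """ops: list of ('source', src_off, length), ('target', tgt_off, length),
    or ('new', length) tuples, in target order. Returns a per-byte list
    where True means the target byte certainly carries new data."""
    mask = []
    for op in ops:
        kind = op[0]
        if kind == 'new':
            # literal new data: definitely new
            mask.extend([True] * op[1])
        elif kind == 'source':
            # copy from the source text: might be moved old data, so unknown
            mask.extend([False] * op[2])
        elif kind == 'target':
            off, length = op[1], op[2]
            # target copies inherit the status of the bytes they re-copy;
            # copied byte by byte since the range may overlap its own output
            for i in range(length):
                mask.append(mask[off + i])
    return mask
```

For example, a window of a 3-byte source copy, 2 bytes of new data, and a 2-byte target copy of those new bytes yields three "unknown" bytes followed by four "definitely new" ones. Everything marked False is exactly the ambiguity described above.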

I agree that it would be nice if the server could generate some sort of
byte-range oriented info, but I don't think it can be done just by
looking at the deltas. It's sad, I know...

-- Brane

To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Thu Feb 10 03:58:42 2005

This is an archived mail posted to the Subversion Dev mailing list.

This site is subject to the Apache Privacy Policy and the Apache Public Forum Archive Policy.