Re: My take on the diff-optimizations-bytes branch

From: Johan Corveleyn <jcorvel_at_gmail.com>
Date: Mon, 3 Jan 2011 00:13:51 +0100

On Sun, Jan 2, 2011 at 10:04 PM, Johan Corveleyn <jcorvel_at_gmail.com> wrote:
> On Sat, Jan 1, 2011 at 11:25 PM, Stefan Fuhrmann
> <stefanfuhrmann_at_alice-dsl.de> wrote:
>> Hi Johan,
>>
>> Thursday night I did something stupid and had a look at how
>> svn blame could be made faster based on the HEAD code in
>> your branch.
>>
>> One night and most of the following day later, I think I made it
>> a few percent faster now. Some of the code I committed directly
>> to /trunk and you need to pull these changes into your branch
>> to compile the attached patch.
>>
>> Feel free to test it, take it apart and morph it any way you like --
>> I know the patch isn't very pretty in the first place. I tested the
>> patch on x64 LINUX and would like to hear whether it at least
>> somewhat improved performance on your system for your
>> svn blame config.xml use-case.
>>
>> -- Stefan^2.
>>
>> [[[
>> Speed-up of datasource_open() and its sub-routines
>> by a series of optimizations:
>>
>> * allocate the file[] array on stack instead of heap
>>  (all members become directly addressable through
>>  the stack pointer because all static sub-functions
>>  will actually be inlined)
>> * minor re-arrangements in arithmetic expressions to
>>  maximize reuse of results (e.g. in INCREMENT_POINTERS)
>> * move hot spots to separate functions and provide a
>>  specialized version for file_len == 2
>>  (many loops collapse to a single execution, other
>>  values can be hard-coded, etc.)
>> * use separate loops to process data within a chunk
>>  so we don't need to call INCREMENT_POINTERS & friends
>>  that frequently
>> * scan / compare strings at machine-word granularity
>>  instead of bytes
>> ]]]
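
That last point is the part I find most interesting. For reference, my
understanding of the idea is roughly the sketch below; this is just my
own simplified illustration (made-up names, no chunk-boundary or
file[] handling), not code from your patch:

[[[
/* Find the length of the common prefix of two buffers by comparing
 * one machine word at a time, then finishing byte-wise.  The real
 * code also has to deal with chunk boundaries and alignment. */
#include <stddef.h>
#include <string.h>

typedef unsigned long word_t;   /* stand-in for a machine word */

static size_t
common_prefix_len(const char *a, const char *b, size_t len)
{
  size_t i = 0;

  /* Word-granularity pass; memcpy avoids unaligned loads. */
  while (i + sizeof(word_t) <= len)
    {
      word_t wa, wb;
      memcpy(&wa, a + i, sizeof(wa));
      memcpy(&wb, b + i, sizeof(wb));
      if (wa != wb)
        break;                  /* mismatch is somewhere in this word */
      i += sizeof(word_t);
    }

  /* Byte-granularity pass for the remainder and the mismatching word. */
  while (i < len && a[i] == b[i])
    i++;

  return i;
}
]]]
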
>
> Hi Stefan,
>
> Thanks for taking a look. I really appreciate it.
>
> When I saw your first couple of commits, to the adler32 stuff, I
> already thought: hmmm, he's up to something :-). And after I saw your
> change to eol.c#svn_eol__find_eol_start, reading per machine-word, I
> was thinking: hey, I could really use something like that in the
> prefix/suffix scanning. Nice ... :-)
>
> (I had some trouble applying your patch: it doesn't apply cleanly
> anymore to HEAD of the branch, but most hunks were applied correctly
> and I could manually fix up the rest, so no problem.)
>
> However, first tests are not so great. In fact, it's about 25% slower
> (when blaming my settings.xml file with 2200 revisions, it's spending
> about 90 seconds in diff, vs. around 72 with HEAD of
> diff-optimizations-bytes).
>
> Analyzing further, I can see it's spending significantly less time in
> prefix/suffix scanning, but more time in token.c#svn_diff__get_tokens
> (which is where the original algorithm gets the tokens/lines and
> inserts them into the "token tree"). This tells me something is wrong:
> either prefix/suffix scanning bails out too soon, leaving much more
> work for the regular algorithm, or it's simply not producing correct
> results.
>
> Looking at the result of the blame operation, it seems it's the
> latter. The final result of the blame is not correct anymore.
>
> I'll try to look some more into it, to see what's going wrong. Maybe
> you can also see it with a simple diff of a large file ... (for a good
> example in svn's own codebase, you could try
> subversion/tests/cmdline/merge-tests.py (pretty large, and almost 700
> revisions)).

Ok, by eliminating parts of the new stuff, I found out the problem is
in scan_backward_fast (part of suffix scanning). If I bypass that, and
always take scan_backward_slow, it works.
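
In case it helps to see what I mean by "bypass": I just short-circuit
the fast path so that everything goes through the generic byte-wise
scan again. Roughly like the simplified sketch below (the real
functions work on the file[] structures and chunked reads, not plain
buffers, so the signatures here are made up for illustration):

[[[
/* Simplified illustration of the bypass, not the actual branch code.
 * Until the word-level scanner is fixed, the "fast" suffix scan just
 * defers to the byte-wise "slow" one. */
#include <stddef.h>

static size_t
scan_backward_slow(const char *a, const char *b, size_t len)
{
  size_t n = 0;
  while (n < len && a[len - 1 - n] == b[len - 1 - n])
    n++;                        /* count matching bytes from the end */
  return n;
}

static size_t
scan_backward_fast(const char *a, const char *b, size_t len)
{
  /* Temporarily forward to the slow variant. */
  return scan_backward_slow(a, b, len);
}
]]]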

And it's fast too! It's taking only 58 seconds in "diff", vs. 72 for
the normal version. Splitting that up further, it's taking only 28
seconds in prefix/suffix scanning, vs. ~47 for the normal version (for
some reason the lcs processing took a little longer, but that's
probably unrelated; I only did a single run). And that's with
scan_backward_slow. That's pretty amazing.

The code of scan_backward_fast is pretty difficult for me to
understand, so I'm not sure I can spot the problem. I'll continue
staring at it for a little while ...

Cheers,

-- 
Johan
Received on 2011-01-03 00:15:31 CET
