
Re: My take on the diff-optimizations-bytes branch

From: Johan Corveleyn <jcorvel_at_gmail.com>
Date: Mon, 17 Jan 2011 03:12:47 +0100

On Sat, Jan 8, 2011 at 6:50 AM, Stefan Fuhrmann
<stefanfuhrmann_at_alice-dsl.de> wrote:
> On 03.01.2011 02:14, Johan Corveleyn wrote:
>>
>> On Mon, Jan 3, 2011 at 12:13 AM, Johan Corveleyn <jcorvel_at_gmail.com>
>>  wrote:
>>>
>>> On Sun, Jan 2, 2011 at 10:04 PM, Johan Corveleyn <jcorvel_at_gmail.com>
>>>  wrote:
>> For now, some feedback on the rest of the patch:
>>
>> [[[
>>
>>> -          DECREMENT_POINTERS(file, file_len, pool);
>>> -        }
>>> +        DECREMENT_POINTERS(file, file_len, pool);
>>>      }
>>>
>>> +  return SVN_NO_ERROR;
>>> +}
>>> +
>>> +/* Find the prefix which is identical between all elements of the FILE
>>> array.
>>> + * Return the number of prefix lines in PREFIX_LINES.  REACHED_ONE_EOF
>>> will be
>>> + * set to TRUE if one of the FILEs reached its end while scanning
>>> prefix,
>>> + * i.e. at least one file consisted entirely of prefix.  Otherwise,
>>> + * REACHED_ONE_EOF is set to FALSE.
>>> + *
>>> + * After this function is finished, the buffers, chunks, curp's and
>>> endp's
>>> + * of the FILEs are set to point at the first byte after the prefix. */
>>> +static svn_error_t *
>>> +find_identical_prefix(svn_boolean_t *reached_one_eof, apr_off_t
>>> *prefix_lines,
>>> +                      struct file_info file[], apr_size_t file_len,
>>> +                      apr_pool_t *pool)
>>> +{
>>> +  if (file_len == 2)
>>> +    SVN_ERR(find_identical_prefix_fast(reached_one_eof, prefix_lines,
>>> +                                       file, pool));
>>> +  else
>>> +    SVN_ERR(find_identical_prefix_slow(reached_one_eof, prefix_lines,
>>> +                                       file, file_len, pool));
>>> +
>>
>> Small problem here (not really visible in practice, but potential
>> change in behavior none the less): if either of the above function
>> calls exited early because of reached_all_eof, this function should
>> also exit early.
>
> Yep, that is just an artifact caused by moving code
> into a sub-function.

Ok, that's no problem anymore, since I removed the early-exit in
r1057435. It was actually not correct, and was the main reason why
some tests kept failing when I ported it to diff3 (see that revision's
commit message for more details).

[ ... snip ... ]

>> Also, it would really help if you could split up this patch (though
>> this was certainly a good way to try it out and give me an idea of the
>> possibilities).
>
> Giving you some ideas was the main point of it.
>
> If I find some time, I will try to reduce the patch
> (no fast / slow separation). But since the heap
> to stack conversion spreads across all of the
> code, it is hard to split the patch.
>
>> It would be interesting to see where the biggest gains
>> are coming from (I'm guessing from the "per-machine-word"
>> reading/comparing; I'd like to try that first, maybe together with the
>> allocating of the file[] on the stack; I'd like to avoid
>> special-casing file_len==2, splitting up functions in *_slow and
>> *_fast, because it makes it a lot more difficult imho (so I'd only
>> like to do that if it's a significant contributor)). But if you want
>> to leave that as an exercise for the reader, that's fine as well :-).
>
> Exercise is certainly not a bad thing ;)

Ok, I'll give it a go in the coming days, now that I've got
diff3/diff4 working, and I've got a suitable script for generating
big, reproducible test files.

> But I think, the stack variable is certainly helpful
> and easy to do. While "chunky" operation gives
> a *big* boost, it is much more difficult to code if
> you need to compare multiple sources. It should
> be possible, though.
>
> The great advantage of the file_len==2 case is
> that this is the only time-critical operation (a
> true merge over long files is rare). Also, many
> constructs collapse if the count is fixed to 2:
>
> for (i = 1, is_match = TRUE; i < file_len; i++)
>   is_match = is_match && *file[0].curp == *file[i].curp;
> while (is_match)
> {
> ...
>   for (i = 1, is_match = TRUE; i < file_len; i++)
>     is_match = is_match && *file[0].curp == *file[i].curp;
> }
>
> becomes
>
> while(*file[0].curp == *file[1].curp)
> {
> }

Hm, that's true, but I'm not sure that will have a big impact (but like
you say, we'll need to do more tests). And now that I've read about a
really bad merge-case on the users-list [1], I'm thinking that diff3
should probably also get all the perf-improvements we can give it.
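Just to make the "chunky" idea concrete for anyone following along: a
per-machine-word prefix scan over two byte ranges could look roughly
like the sketch below. The function and variable names are mine, not
from the branch, and the real code of course works on the files' chunk
buffers and pointers rather than plain arrays; this is only meant to
show why word-at-a-time comparison is cheap for the 2-file case.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch (not branch code): return the length of the
   identical prefix of A and B, both LEN bytes long.  Compare one
   machine word at a time, then finish byte-by-byte to locate the
   exact mismatch.  memcpy sidesteps alignment problems. */
static size_t
common_prefix_len(const char *a, const char *b, size_t len)
{
  size_t i = 0;

  /* Fast path: whole words, as long as both buffers have one left. */
  while (i + sizeof(size_t) <= len)
    {
      size_t wa, wb;
      memcpy(&wa, a + i, sizeof(size_t));
      memcpy(&wb, b + i, sizeof(size_t));
      if (wa != wb)
        break;
      i += sizeof(size_t);
    }

  /* Slow path: pin down the first differing byte (or hit LEN). */
  while (i < len && a[i] == b[i])
    i++;

  return i;
}
```

Extending this to N sources is possible, as Stefan says, but the inner
test stops being a single word compare, which is where the boost comes
from.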

Actually, what's a little bit troubling is that there are currently
only 3 possible "file_len"s, of which only 2 are used in practice:
diff2 and diff3 (diff4 is not used in svn core, only in
tools/diff/diff4). So, if we make a slow and a fast version, we could
just as well make an XXX2 and an XXX3 version explicitly, and dispense
with the need to pass arrays and array lengths. Hmmm... generic (but
more complex) code vs. a couple of specialized versions.
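To spell out what that specialization buys: the N-way match test is a
loop, while the 2-way form collapses to a single comparison. The sketch
below only mimics the patch's file_info struct with illustrative names
(it is not the actual branch code):

```c
#include <stddef.h>

/* Illustrative only: a cut-down stand-in for the patch's file_info,
   keeping just the current-byte pointer. */
struct file_info { const char *curp; };

/* Generic N-way test: do all files' current bytes match file 0's? */
static int
is_match_n(const struct file_info file[], size_t file_len)
{
  size_t i;
  for (i = 1; i < file_len; i++)
    if (*file[0].curp != *file[i].curp)
      return 0;
  return 1;
}

/* Specialized 2-way test: the loop collapses to one comparison,
   which the compiler can keep entirely in registers. */
static int
is_match_2(const struct file_info file[])
{
  return *file[0].curp == *file[1].curp;
}
```

An explicit XXX2/XXX3 pair would hard-code file_len the same way, at
the cost of maintaining two copies of the scan logic.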

>
> But basically, we will need to run some more tests.
>
> -- Stefan^2.
>
>

Cheers,

-- 
Johan
[1] http://svn.haxx.se/users/archive-2011-01/0245.shtml
Received on 2011-01-17 03:13:46 CET

This is an archived mail posted to the Subversion Dev mailing list.