Johan Corveleyn wrote on Wed, Dec 01, 2010 at 00:25:27 +0100:
> I am now considering abandoning the tokens approach, for the following reasons:
...
> So, unless someone can convince me otherwise, I'm probably going to
> stop with the token approach. Because of 2), I don't think it's worth
> it to spend the effort needed for 1), especially because the
> byte-based approach already works.
>
In other words, you're saying that the token-based approach (b) won't
be as fast as the bytes-based approach can be, and (a) requires
significant effort to implement the reverse reading of tokens (i.e.,
a backwards gets())?
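
For concreteness, here is a minimal sketch of what such a backwards
gets() would have to do, assuming a plain FILE * opened in binary mode
(the name and interface are hypothetical, not code from the branch).
It reads one byte at a time for clarity; a real implementation would
buffer chunks, which is precisely where the effort in 1) lies:

[[[
#include <stdio.h>

/* Read the line that ends at offset *POS in FP, scanning backwards.
 * Copies the line (without its terminating '\n') into BUF, updates
 * *POS to the start of that line, and returns the line's length, or
 * -1 once *POS reaches the beginning of the file.  A real version
 * would also have to handle CR and CRLF line endings.
 */
static long
read_line_backwards(FILE *fp, long *pos, char *buf, size_t bufsize)
{
    long p = *pos;
    long end = p;       /* one past the last byte of the line */
    long start;

    if (p <= 0)
        return -1;

    /* Step over the newline that terminates this line, if present. */
    fseek(fp, p - 1, SEEK_SET);
    if (fgetc(fp) == '\n')
        end = --p;

    /* Scan backwards until the previous '\n' or beginning of file. */
    while (p > 0) {
        fseek(fp, p - 1, SEEK_SET);
        if (fgetc(fp) == '\n')
            break;
        p--;
    }
    start = p;
    *pos = start;       /* next call continues from here */

    /* Copy the line into BUF, truncating if necessary. */
    if ((size_t)(end - start) >= bufsize)
        end = start + (long)bufsize - 1;
    fseek(fp, start, SEEK_SET);
    (void)fread(buf, 1, (size_t)(end - start), fp);
    buf[end - start] = '\0';
    return end - start;
}
]]]

Starting with *POS equal to the file size and calling this in a loop
yields the lines in reverse order.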
> Any thoughts?
>
-tokens/BRANCH-README mentions that one of the advantages of the tokens
approach is that the comparison is done only after whitespace and
newlines have been canonicalized (when -x-w or -x--ignore-eol-style is
in effect). IIRC, the -bytes approach doesn't currently take advantage
of these -x flags?
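
For comparison, something like the following (an assumed stand-in, not
svn's actual svn_diff__normalize_buffer()) is the kind of
canonicalization the tokens approach can apply to each token before
comparing, giving -x-w and -x--ignore-eol-style semantics almost for
free:

[[[
#include <stddef.h>

/* Normalize BUF in place and return its new length: drop horizontal
 * whitespace when IGNORE_WS is set (roughly -x-w), and canonicalize
 * CR and CRLF to a single LF when IGNORE_EOL_STYLE is set (roughly
 * -x--ignore-eol-style).
 */
static size_t
normalize_token(char *buf, size_t len,
                int ignore_ws, int ignore_eol_style)
{
    size_t in, out = 0;

    for (in = 0; in < len; in++) {
        char c = buf[in];

        if (ignore_eol_style && c == '\r') {
            buf[out++] = '\n';           /* CR or CRLF -> LF */
            if (in + 1 < len && buf[in + 1] == '\n')
                in++;                    /* swallow the LF of a CRLF */
            continue;
        }
        if (ignore_ws && (c == ' ' || c == '\t'))
            continue;                    /* drop whitespace entirely */

        buf[out++] = c;
    }
    return out;
}
]]]

Two tokens are then compared on their normalized forms, so lines that
differ only in whitespace or EOL style compare as equal.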
What is the practical effect of the fact that the -bytes approach
doesn't take advantage of these flags? If a file (with a moderately
long history) has had all its newlines changed in rN, then I assume
that your -bytes optimizations will speed up all the diffs that 'blame'
performs on that file, except for the single diff between rN and its
predecessor?
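
Here's a toy program (mine, just to make the point concrete) showing
why the bytes-based scan gets nothing out of that one diff: the byte
streams diverge at the very first line ending, so the identical-prefix
scan stops almost immediately even though the files are identical
modulo EOL style:

[[[
#include <stdio.h>
#include <string.h>

/* Length of the identical prefix of A and B, compared byte by byte. */
static size_t
common_prefix_bytes(const char *a, size_t alen,
                    const char *b, size_t blen)
{
    size_t i = 0;
    while (i < alen && i < blen && a[i] == b[i])
        i++;
    return i;
}

int
main(void)
{
    /* rN-1: LF line endings; rN: the same content with CRLF. */
    const char *before = "int main(void)\n{\n    return 0;\n}\n";
    const char *after  = "int main(void)\r\n{\r\n    return 0;\r\n}\r\n";

    printf("common prefix: %zu bytes\n",
           common_prefix_bytes(before, strlen(before),
                               after, strlen(after)));
    /* Prints "common prefix: 14 bytes": the scan stops at the first
       '\n' vs '\r', i.e. at the end of the first line. */
    return 0;
}
]]]

All the other diffs in the blame run, where the EOL style is stable,
would still get the full prefix/suffix speedup.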