On Fri, Jan 8, 2010 at 2:16 PM, Erik Huelsmann <ehuels_at_gmail.com> wrote:
>> Anyway, I can understand Andreas' desire to clean out old history
>> that's no longer needed, and that's only slowing things down. But
>> sometimes, having that entire history is very useful, so we chose not
>> to do that. Instead, we learned to live with the current limitations
>> for now, and we hope they will be improved someday...
>
> Ok. I see your problem now. Did you know you can restrict blame to a
> subrange of the revisions? That will give you at least a way to limit
> the 4 hours.
Yes, I know, but that's not all that useful. If the line I'm looking at
was really last changed in revision 328, then I want to know that (and
want to see that revision's log message). But it might also have been
edited in revision 105000. I cannot easily tell those two cases apart
unless I blame the whole revision range.
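(For reference, the range-limited blame Erik means looks something like
the following; the file name and revision numbers here are just
placeholders:

  svn blame -r 100000:HEAD foo.c

The problem is that for any line whose last change falls outside that
range, I still don't get the real revision and its log message, which
is exactly the case I described above.)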
Also interesting to know: if I use one of the "ignore-whitespace"
options, it's a lot faster, down to 1 hour (though that's still quite
unusable). My hypothesis is that some very large deltas become very
small, because they only change whitespace throughout the file (which
happens from time to time, when someone accidentally re-indents the
file or changes tabs to spaces and such).
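(For anyone who wants to try the same thing, I mean passing the diff
extension options to blame, e.g.:

  svn blame -x -w foo.c   # ignore all whitespace
  svn blame -x -b foo.c   # ignore changes in the amount of whitespace

where foo.c again just stands in for the real file.)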
> Comparing with CVS is logical, but CVS has these values
> precomputed: ready for you to use. So, obviously, it'll be faster than
> any scheme which needs to do calculations to find the same
> information.
Yes, that's true. CVS just has that information readily available
because that's the way it stores its versioned data.
I think the only way SVN can come close to acceptable performance for
this use case (a large file with lots of revisions) is to precompute
and cache that data on the server side, so it has the answer ready the
way CVS does. If anyone were to implement this, I for one would be
very happy :D.
Regards,
Johan