On Tue, Feb 08, 2011 at 04:34:52PM +0100, Bert Huijben wrote:
> There is nobody actively working on status and there are no open
> issues on status to block branching...
There's the general wc-ng performance issue (but I don't think it has
an issue number attached to it right now).
> So if status is still a show-stopper we should focus on that instead
> of trying to improve 'svn proplist -R', which is not a very common
> operation in svn. (Merge works per node, so it doesn't benefit from
> the performance enhancements :( )
I started working on proplist because it's pretty simple and therefore
a good way to get going. My end goal is not to improve proplist.
I want to understand where wc-ng performance problems come from and
how to fix them.
I do intend to look at other subcommands once proplist has been dealt with.
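In practice that means looking at the queries we issue against wc.db.
Here's a sketch of the kind of check I've been doing with SQLite's
EXPLAIN QUERY PLAN (the query itself is made up for illustration, it is
not a literal query from wc-queries.sql):

[[[
-- Made-up recursive property lookup, just to show the check:
-- does SQLite use the index on (wc_id, local_relpath), or does it
-- scan all of NODES for every request?
EXPLAIN QUERY PLAN
SELECT local_relpath, properties
FROM nodes
WHERE wc_id = 1
  AND local_relpath LIKE 'some/dir/%';
]]]

One query that walks the index once is a very different story from a
per-node query repeated for every node in the tree; that's the kind of
difference I'm trying to pin down before moving on to other subcommands.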
>
> As things are today it looks like status will be released in its
> current form for 1.7.
> I don't see a problem with the current status performance from my
> perspective; unless somebody decides to just disable in memory
> temptables without profiling to fix some other issues in a different
> place.
Ah, so status performs well using per-node queries?
I thought it was still slower than 1.6.x for some use cases.
See http://svn.haxx.se/dev/archive-2010-09/0526.shtml
I think we should try to get trunk to perform at least as well as 1.6.x
for all use cases before release.
> I don't have a problem with choosing file backed temptables over
> memory backed, but I do have an issue with doing it for theoretical
> reasons which are only tested under very specific circumstances. And a
> recursive proplist is not a very common and/or performance critical
> subversion operation from my perspective.
Of course proplist isn't very critical. But that's not the point.
The point is that Subversion should be written in a way that prevents
it from running out of memory when it operates on large working copies.
What if users run into this with very large working copies after upgrading to 1.7?
Have we actually tested what happens on working copies of that size?
Code that blows up if the input data set is too large is not acceptable.
Using file-backed temporary tables might be slower, but it's a safe default.
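To make that tradeoff concrete, this is roughly the knob involved
(temp_store is standard SQLite; the table name and where exactly wc-ng
would set the pragma are just for illustration):

[[[
-- temp_store controls where SQLite keeps temporary tables and indices:
-- FILE spills them to disk, MEMORY keeps everything in RAM.
PRAGMA temp_store = FILE;

-- With FILE, a temp table like this grows on disk, so a huge working
-- copy makes the operation slower but cannot exhaust memory.
CREATE TEMPORARY TABLE targets_list (
  local_relpath TEXT PRIMARY KEY
);
]]]

With temp_store = MEMORY we get the speed back, but the memory
footprint then scales with the size of the working copy, which is
exactly the failure mode I'd like to rule out by default.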