On Tue, Nov 01, 2011 at 10:38:07AM -0600, michael_rytting_at_agilent.com wrote:
> Not much of an improvement. "svn rm dir/*" now takes 2m6s vs 7s for "svn rm dir".
Before the patch, we had:

  "svn rm dir/*"   6m15s    1.7.1
  "svn rm dir"      8.5s    1.7.1
  "svn rm dir/*"   1.14s    1.6.17
So this patch cut about 4 minutes of runtime, which is somewhat
significant but definitely not enough. But it's a step in the
right direction.
> As a side note, I really think there is fundamentally something wrong with the performance of "svn rm" with large working copies. Here are some example times.
> svn rm <file> 7s
> svn add <file> 0.126s
> svn st <file> 2s
> svn blame <file> 0.2s
> svn lock <file> 0.12s
> svn unlock <file> 0.103s
> svn log <file> 0.089s
> svn revert <file> 0.133s
> svn info <file> 0.074s
> I'm assuming that all these commands are doing some form of sqlite
> database transactions, but the rm transaction, in particular, is very
> slow. Even when using a local working copy, I am seeing large
> discrepancies in the time it takes to run "svn rm" vs most other svn
> commands. It's just that since the local working copy is faster
> overall, you are less likely to notice the large discrepancy in
> performance.
Yes, that looks bad. There might be a linear DB table scan involved
in 'svn rm' that becomes noticeable on large WCs.
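One way to check for that (a sketch: it assumes a 1.7 working copy
with wc.db in the top-level .svn/, and the SELECT below is a made-up
stand-in for whatever statement 'svn rm' actually runs) is to feed
the suspect query to sqlite3's EXPLAIN QUERY PLAN:

  $ sqlite3 .svn/wc.db "EXPLAIN QUERY PLAN \
      SELECT * FROM NODES WHERE local_relpath = 'dir/somefile';"

If the output shows "SCAN TABLE NODES" rather than "SEARCH TABLE NODES
USING INDEX ...", sqlite is doing a linear scan there.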
Do you see this difference only on NFS or also on local disk?
How are you measuring the time?
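For comparable numbers, something like the shell's built-in time on a
single operation would help, e.g. (with a placeholder path):

  $ time svn rm dir/somefile
  $ svn revert dir/somefile

reverting between runs and averaging over a few repetitions.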
Note that most commands take at least one second because Subversion
waits for one second for filesystem timestamps to update after some
operations. To cut this deliberate delay out of the equation, do this: