I'm trying to commit a large directory (about 1 GB of data in 20,000
files across 6,000 directories; those figures also include ignored and
not-added files, so I'd guess knock 10-20% off that).
This is a new set of source code from a vendor, roughly identical to one
I did a few months ago using svn 1.5.5.
Previously, the directory was added and committed without too much of a
problem: it took a little while, but I consistently got 2 kps on my
check-ins all through the process (more when I turned the virus checker
off on the server!).
Now I'm trying to add the same tree using svn 1.6.2, and it's not too
happy. Using TortoiseSVN I see it is very quick at first, but it slows
down rapidly after 30 MB has been transferred; by 40 MB the transfer
rate has slowed considerably. CPU usage drops right down (from 17% to
0.5%) while memory usage climbs steadily. When I killed it (using
Tortoise) it had transferred 43 MB but was using 183 MB of RAM.
I've tried it from the svn command line, but that shows the same
behaviour (except that I can't see how much has been transferred, just
a command window full of dots).
Short of committing each subdir individually, can anyone say why the
performance has dropped so dramatically for large datasets? This was
fine with 1.5.5, otherwise I'd have just assumed I was pushing it too
hard.
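In case it helps clarify what I mean, the per-subdir workaround would look something like this POSIX-shell sketch (untested against my actual repository; the function name, commit messages, and --dry-run flag are all just placeholders I made up):

```shell
# commit_by_subdir WC_ROOT [--dry-run]
# Runs one "svn commit" per top-level subdirectory of the working copy,
# so each transaction stays small instead of one huge 1 GB commit.
# With --dry-run it only prints the commands it would run.
commit_by_subdir() {
    wc_root=$1
    dry=$2
    for dir in "$wc_root"/*/ ; do
        [ -d "$dir" ] || continue
        name=$(basename "$dir")
        if [ "$dry" = "--dry-run" ]; then
            # Just show what would be committed.
            echo svn commit -m "vendor drop: $name" "$dir"
        else
            svn commit -m "vendor drop: $name" "$dir"
        fi
    done
}

# Example (dry run):
#   commit_by_subdir /path/to/vendor-wc --dry-run
```

Obviously that gives you 6,000-odd subdirs' worth of separate revisions rather than one clean vendor-drop revision, which is exactly why I'd rather not do it.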
I'm happy to test with a debug version or provide a crash dump if
anyone wants one, but bear in mind I'm running on Windows (XP SP3,
dual-CPU box with 3 GB RAM).
Received on 2009-05-29 11:59:16 CEST