Hi all,
Since merge will be an important topic for 1.9, I ran a quick
test to see how we are doing for a small project like SVN.
Please note that my local mirror of the ASF repo is about 2
months behind (r1457326) - in case you want to verify my data.
Summary:
Merges can be very slow and might take hours or even days
to complete for large projects. The client-side merge strategy
inflates the load on both sides by at least a factor of 10 in the
chosen scenario. Without addressing this issue, it will be hard
to significantly improve merge performance in that case, but
addressing it requires a different merge strategy.
Test Scenario:
A catch-up merge after various cherry picks.
$ svn co svn://server/apache/subversion/branches/1.7.x .
$ svn merge svn://server/apache/subversion/trunk . --accept working
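The real/user/sys figures quoted further down are plain time(1)
output. Roughly, they can be reproduced along these lines; the
cache-dropping line assumes a Linux server with root access and
is only needed for the cold-cache runs:

$ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # on the server
$ time svn merge svn://server/apache/subversion/trunk . --accept working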
Findings:
Server vs. client performance
* With all caches cold, both operations (checkout and merge)
are limited by server-side I/O (duh!). FSFS-f7 is 3..4 times
faster and should be able to match the client speed within
the next two months or so.
* With OS file caches hot, the merge is client-bound, with the
client using about 2x as much CPU as the server.
* With server caches hot, the ratio becomes 10:1 (one way to
check these ratios is sketched right after this list).
* The client is not I/O bound (working copy on RAM disk).
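One way to check those CPU ratios (a sketch only; repository
path is illustrative): run svnserve in the foreground under GNU
time, perform a single merge from the client, then stop svnserve
and compare its reported user time with the client's "user"
figure from time(1):

$ /usr/bin/time -v svnserve -d --foreground -r /srv/svn-mirror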
How slow is it?
Here is the fastest run of the merge with hot server caches.
Please note that SVN is a mere 2,000-file project; add two
zeros for large projects:
real 1m16.334s
user 0m45.212s
sys 0m17.956s
Difference between "real" and "user"
* Split roughly evenly between client and server / network
* The individual client-side functions are relatively efficient;
otherwise, the time spent in client code would dwarf the OS
and server / network contributions.
Why is it slow?
* We obviously request, transfer and process far too much data
(roughly 800 MB for a working copy holding 45 MB of user data):
RX packets:588246 (av 1415 B) TX packets:71592 (av 148 B)
RX bytes:832201156 (793.6 MB) TX bytes:10560937 (10.1 MB)
* A profile shows that most CPU time is either spent directly on
processing those 800 MB (e.g. MD5 checksumming) or is spread
across many "reasonable" functions, such as running status,
each contributing 10% or less.
* Root cause: we run the merge 169 times, i.e. we merge that many
revision ranges and request ~7800 files from the server. That
is not necessary for most files most of the time (one way to
inspect those numbers is sketched right after this list).
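For anyone who wants to poke at these numbers on their own
mirror: the RX/TX counters above are ordinary interface
statistics (ifconfig or "ip -s link"), and the revisions that
get grouped into those 169 ranges can be counted with:

$ svn mergeinfo --show-revs eligible \
      svn://server/apache/subversion/trunk . | wc -l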
Necessary strategic change:
* We need to do the actual merging "space major" instead of
"revision major" (a rough command-line illustration follows
right after this list).
* Determine tree conflicts in advance. Depending on the
conflict resolution scheme, set the upper revision for the
whole merge to the conflicting revision.
* Determine per-node revision ranges to merge.
* Apply ranges ordered by their upper number, lowest one first.
* In case of unresolvable conflicts, merge all other nodes up to
the revision that caused the conflict.
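To make "space major" a bit more concrete, here is a
deliberately crude illustration built from nothing but today's
command line client. The file names are placeholders, the
tree-conflict handling and the upper-revision ordering are
omitted, and a real implementation would live in libsvn_client
rather than paying one round trip and one subtree mergeinfo
record per node:

TRUNK=svn://server/apache/subversion/trunk
for node in subversion/libsvn_client/merge.c subversion/include/svn_client.h
do
    # every node gets all of its eligible ranges applied in one
    # pass instead of being visited once per revision range
    svn merge "$TRUNK/$node" "$node" --accept working
done

The point is only the ordering: one pass over the tree with each
node's ranges handled together, instead of one pass over the
whole tree per revision range.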
If there have been no previous cherry pick merges, the above
strategy should be roughly equivalent to what we do right now.
In my test scenario, it should reduce the number of files requested
from the server by a factor of 5 to 10. Operations like log and
status would need to be run only once, maybe twice. So, it
seems reasonable to expect a 10-fold speedup on the client
side and also a much lower server load.
Fixing this scenario will drastically change the relative time
spent in virtually all operations involved, so until it is fixed,
it is not clear where local tuning should start.
-- Stefan^2.
--
Join one of our free daily demo sessions on Scaling Subversion
for the Enterprise: http://www.wandisco.com/training/webinars