Bruce Cropley wrote:
> When we migrated, we started with Apache (no
> compression) and that was around 4 hours. Then we
> configured mod_deflate in httpd.conf and tweaked the
> client "servers" file as required to get Apache-style
> compression going. That only got it down to around 3 hours.
> We're also seeing speed problems locally. A checkout
> from CVS was around 12m here. With SVN it is 36m+.
> Updates and commits also feel slower, though we
> haven't timed them as much yet.
Subversion performance is definitely not the same as
CVS for some operations (checkouts, updates, commits,
etc.). CVS has had a long time to optimise its network
traffic, and it shows.
Your Melb -> LA setup sounds like it's bandwidth
limited. Have you used ethereal or some other tool
to actually measure how much data is going over the
wire (comparing with and without mod_deflate for
http access)? Subsequent updates should be ok,
as Subversion only sends deltas over the wire,
but for initial checkouts, all the data must
come down the wire.
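If mod_deflate isn't already switched on for the SVN location, a minimal httpd.conf fragment looks something like the following (the /svn location and /var/svn path are just placeholders, and it assumes mod_dav_svn is already loaded):

```apache
# Hypothetical httpd.conf fragment: compress DAV responses for /svn
LoadModule deflate_module modules/mod_deflate.so

<Location /svn>
  DAV svn
  SVNParentPath /var/svn
  # Compress response bodies before they hit the WAN
  SetOutputFilter DEFLATE
</Location>
```

The client side also needs "http-compression = yes" in its "servers" file (that's the default) for the negotiation to happen.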
I'm not sure about the CVS protocol, but the SVN HTTP
protocol has a fair number of round-trips,
at least one for every file. If you've got a stack
of small files, an HTTP GET request will be sent
for each one. If you have high bandwidth but high
latency, that's going to be a factor. svnserve
probably behaves better here.
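To see why latency alone can dominate, here's a back-of-the-envelope sketch (the file count and round-trip time are invented numbers, not measurements from your setup):

```shell
# Hypothetical: 5000 small files, ~250 ms round-trip Melb -> LA,
# one HTTP GET per file. Minutes spent purely waiting on round-trips:
echo $(( 5000 * 250 / 1000 / 60 ))
```

That's twenty-odd minutes before a single payload byte counts, which is why compression alone can't fully close the gap on a high-latency link.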
Consider how often these slow operations will be
performed. You can often use "svn switch" to
quickly get a working copy of a branch. Tagging
is *very* fast compared to CVS. You have
atomic commits and the ability to re-arrange
your repository without losing or breaking
history. Personally, I think these outweigh a bit
of a performance drop, especially if the hit is
only on rare operations.
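To make the switch/tag point concrete, here's a sketch against a throwaway local repository (all paths and names are illustrative, not from your setup):

```shell
set -e
TMP=$(mktemp -d)
svnadmin create "$TMP/repo"
URL="file://$TMP/repo"
svn mkdir -q -m "layout" "$URL/trunk" "$URL/tags" "$URL/branches"

# Tagging is a cheap server-side copy, however big the tree:
svn copy -q -m "tag 1.0" "$URL/trunk" "$URL/tags/1.0"

# Re-point an existing working copy at a branch without a fresh
# checkout; only the differences come over the wire:
svn copy -q -m "branch" "$URL/trunk" "$URL/branches/feature"
svn checkout -q "$URL/trunk" "$TMP/wc"
svn switch -q "$URL/branches/feature" "$TMP/wc"
svn ls "$URL/tags"
```

Over a WAN the copies still complete in roughly constant time, whereas CVS walks and tags every file.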
And be thankful you're not using Clearcase :-)
(which is unusable in a WAN situation, and requires
an uber-complicated "multi-site" configuration).
Received on Fri Apr 1 23:30:03 2005