
Re: Expected speed of commit over HTTP?

From: Johan Corveleyn <jcorvel_at_gmail.com>
Date: Thu, 6 Jul 2017 16:27:01 +0200

On Thu, Jul 6, 2017 at 4:10 PM, Philip Martin <philip_at_codematters.co.uk> wrote:
> Paul Hammant <paul_at_hammant.org> writes:
>> I'm making each revision with..
>> dd if=/dev/urandom bs=1M count=500 2>/dev/null > path/to/file/under/versionControl.dat
>> .. which is random enough to completely thwart delta analysis of the file.
>> That's slow enough in itself, but I'm only checking the time of the commit.
> The standard client always sends deltas even for cases like yours where
> the delta is not really an advantage. On receiving the delta the server
> has to first reconstruct the full text and then construct a second delta
> to store in the repository.
> You can make commits a bit faster by using svnmucc without a working
> copy, the svnmucc client always sends a delta against an empty file
> which is faster to calculate than the standard client delta against the
> previous file contents:
> svnmucc put some/file URL
> You can make commits even faster by enabling SVNAutoversioning on the
> server and using curl as your client as then the client doesn't
> calculate delta at all:
> curl -T some/file URL
> The curl commit will still involve calculating a delta on the server.
> On my machine:
> dd: 3.3 sec
> svn commit: 37 sec
> svnmucc commit: 26 sec
> curl commit: 9.5 sec
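Putting Philip's three approaches side by side, a rough timing harness might look like this (the URL, working-copy path and file name are placeholders, not from the thread; the curl upload assumes SVNAutoversioning is enabled on the server):

```shell
# Sketch: timing the three upload paths described above.
URL=http://server/repos/trunk/versionControl.dat

# 1) Standard client: sends a delta against the previous file contents.
time svn commit -m "standard commit" wc

# 2) svnmucc: sends a delta against an empty file; no working copy needed.
time svnmucc -m "svnmucc commit" put wc/versionControl.dat "$URL"

# 3) curl: no client-side delta at all (requires SVNAutoversioning).
time curl -T wc/versionControl.dat "$URL"
```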

[ cc += Stefan Fuhrmann, who might have some more ideas about this ]

If your SVN server is version 1.9 with the FSFS backend, you could try
whether the "compression-level" setting in the [deltification] section
of $REPOS/db/fsfs.conf makes a difference:

### After deltification, we compress the data through zlib to minimize on-
### disk size. That can be an expensive and ineffective process. This
### setting controls the usage of zlib in future revisions.
### Revisions with highly compressible data in them may shrink in size
### if the setting is increased but may take much longer to commit. The
### time taken to uncompress that data again is widely independent of the
### compression level.
### Compression will be ineffective if the incoming content is already
### highly compressed. In that case, disabling the compression entirely
### will speed up commits as well as reading the data. Repositories with
### many small compressible files (source code) but also a high percentage
### of large incompressible ones (artwork) may benefit from compression
### levels lowered to e.g. 1.
### Valid values are 0 to 9 with 9 providing the highest compression ratio
### and 0 disabling it altogether.
### The default value is 5.
# compression-level = 5
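For example, lowering (or disabling) the level is just an edit to that file; a minimal sketch, using a stand-in file in place of the real $REPOS/db/fsfs.conf on the server:

```shell
# Sketch: flip compression-level in db/fsfs.conf.
# A stand-in config file is created here for illustration only;
# on a real server you would edit $REPOS/db/fsfs.conf in place.
cat > fsfs.conf <<'EOF'
[deltification]
# compression-level = 5
EOF

# Uncomment the option and set it to 1 (0 disables compression entirely).
sed -i 's/^# *compression-level *= *5/compression-level = 1/' fsfs.conf

grep '^compression-level' fsfs.conf   # -> compression-level = 1
```

Note, per the config comment quoted above, the setting controls zlib usage in future revisions only; existing revisions are not rewritten.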

There is also the mod_dav_svn config directive SVNCompressionLevel [1]
and the client-side option http-compression [2], which you can try ...
I don't know whether those are a significant factor in your case.

[1] http://svnbook.red-bean.com/en/1.7/svn.ref.mod_dav_svn.conf.html

[2] see your ~/.subversion/servers file, or you can specify it for a
single command with an option:
    --config-option servers:global:http-compression=no
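To make that permanent, the same option can go in the servers file; a minimal stanza, assuming you want it for all servers rather than a specific server group:

```ini
[global]
http-compression = no
```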

Received on 2017-07-06 16:27:33 CEST

This is an archived mail posted to the Subversion Dev mailing list.