I'd like to discuss possible solutions to issue #1573. From the issue:
If you add 5 bytes to a 256 meg file and commit, it takes many
minutes for svn_fs_merge() to return success, because it's
deltifying the previous version of the file against the new one.
Because this is happening as a 'builtin' part of a commit, it
destroys svn's ability to commit changes to large files. When
operating over dav, neon times out waiting for the final 'MERGE'
command to return success. And for people using ra_svn, it's still
not acceptable for users to wait many, many minutes for the commit
to finish.
The fact that the repository stores non-HEAD versions of files as
deltas is an optimization (a deliberate space/time tradeoff) and an
internal implementation detail. We shouldn't be punishing users for it.
There are various proposed solutions in the issue. But for now, I'd
like to talk just about solutions we can implement before 1.0 (i.e.,
before Beta, i.e., before 0.33 :-) ). The two that seem most
promising are:
1. Prevent deltification on files over a certain size, but create
some sort of out-of-band compression command -- something like
'svnadmin deltify/compress/whatever' that a sysadmin or cron job
can run during non-peak hours to reclaim disk space.
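The decision logic for (1) is just a size check at commit time. A minimal sketch, assuming a hypothetical `should_deltify` helper and a repository-configured byte limit (neither is existing Subversion API):

```c
#include <stddef.h>

/* Hypothetical sketch of option (1): skip deltification for files
 * larger than a repository-configured limit.  The helper name and the
 * "0 means no limit" convention are illustrative assumptions, not
 * real Subversion API. */
static int should_deltify(size_t file_size, size_t size_limit)
{
  if (size_limit == 0)
    return 1;   /* limit set to "infinity": always deltify at commit */
  return file_size <= size_limit;
}
```

A sysadmin who leaves the limit at "infinity" gets today's behavior; anyone who sets a finite limit relies on the out-of-band command to reclaim space later.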
2. Make svn_fs_merge() spawn a deltification thread (using APR
threads) and return success immediately. If the thread fails to
deltify, it's not the end of the world: we simply don't get the
space savings.
(2) looks like a wonderful solution; the only thing I'm not sure of is
how to do it inside an Apache module. Does anyone know?
I assume that (1) would involve a repository config option for the
file size. Note also that we used to have an 'svnadmin deltify'
command and could easily get it back (see r3920), so (1) may not
actually be as much work as it looks like. Those who don't want to
run the cron job would just set the size limit to infinity, and always
deltify at commit time, as we do now.
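For those who do want the out-of-band pass, it would just be a scheduled job. A hypothetical crontab entry, assuming 'svnadmin deltify' is restored (cf. r3920); the repository path is illustrative:

```shell
# Reclaim disk space nightly at 3:00 AM, during non-peak hours.
# 'svnadmin deltify' is assumed restored per r3920; /var/svn/repos
# is an illustrative repository path.
0 3 * * * svnadmin deltify /var/svn/repos
```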
Received on Tue Nov 4 06:23:31 2003