
issue 860 investigation

From: Ben Collins-Sussman <sussman_at_collab.net>
Date: 2002-10-21 22:03:26 CEST

So issue #860 has been giving me nightmares. It looks like it's a
regression bug, whereby mod_dav_svn no longer scales: the bug
submitter claims that the server's memory use grows in proportion to
the amount of data being committed to it.

Here are my initial findings, from just playing around with today's svn:

Issue 860 Initial Experiments
=============================

Using httpd-2.0.43 (released) and svn r3430.

Created a 100M random-data file:

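   # ~100MB of random data: 100,000 blocks x 1KB each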
   dd if=/dev/urandom of=foo count=100000 bs=1024

* svn add; commit over ra_dav: svn peaks at 6M, httpd peaks at 17M
* svn add; commit over ra_local: svn binary peaks at 22M

==> This isn't ideal, but it's not horrible either, and it makes
sense: httpd peaks at 17M over ra_dav, and the ra_local peak (22M) is
roughly the client's own 5-6M footprint plus that same 17M. So
libsvn_fs seems to be using 17M of RAM to commit the 100M file.
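
(For anyone who wants to double-check these numbers: a minimal sketch
of a peak-RSS watcher, assuming a Linux-ish ps that prints RSS in KB,
might look like this:

   # Poll a pid's resident set size once a second, report the maximum.
   # $1 is the pid of the svn or httpd process under test; note that
   # httpd is multi-process, so watch the child doing the actual work.
   PID=$1
   peak=0
   while rss=$(ps -o rss= -p "$PID" | tr -d ' '); [ -n "$rss" ]; do
       [ "$rss" -gt "$peak" ] && peak=$rss
       sleep 1
   done
   echo "peak rss: ${peak}K"

The one-second polling makes the exact numbers fuzzy, but it's plenty
to show the trends.)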

* checkout new working copy over ra_dav: svn peaks at 5M, httpd at 12M
* checkout new working copy over ra_local: svn binary peaks at 120M

==> This is absolutely freaky: on an ra_local checkout, the svn
binary's memory use grows with the size of the data being checked out
(120M to check out a 100M file). Can anyone explain this???!
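
For anyone following along at home, the two checkouts are nothing
exotic. With hypothetical repository paths (and modulo the exact
checkout syntax of your svn build), they look like:

   # ra_local: the svn client links libsvn_fs and reads the repository
   # directly through the filesystem
   svn checkout file:///home/sussman/repos/test wc-local

   # ra_dav: the same repository served by mod_dav_svn in httpd-2.0.43
   svn checkout http://localhost/svn/test wc-dav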

The other thing that's odd is that the original bug report claims the
memory lossage happens during a mod_dav_svn *commit*; yet my ra_dav
commit above looks fine, and the blowup I *can* reproduce is on an
ra_local checkout.

Why can't I reproduce the problem? Is it simply because I need to
actually try committing a 2GB file, not a 100M file?
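
For the record, the analogous dd invocation for a ~2GB test file would
be something like the following (untried at that size):

   # ~2GB of random data: 2,000,000 blocks x 1KB each
   dd if=/dev/urandom of=big count=2000000 bs=1024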
