
Re: Memory leak with 1.6.x clients

From: Paul Burba <ptburba_at_gmail.com>
Date: Fri, 29 May 2009 15:02:28 -0400

First thing, what I'm about to describe in no way addresses the core
problem of issue #1964, i.e. that commit holds info in memory for each
committable, so if you have a lot of committables you use a lot of
memory. It does, however, address the significant worsening of the
*general* problem of commit's memory usage from 1.5.x to 1.6.x which
Brian saw (other users have seen this too).
The problem was the introduction of svn_io_open_uniquely_named in 1.6
(r34152). When committing over ra_neon/ra_serf, svn_client__do_commit
loops over each committable and ultimately calls
svn_io_open_uniquely_named for each item:

         libsvn_subr/io.c: svn_io_open_uniquely_named()
         libsvn_subr/io.c: svn_io_open_unique_file3()
         libsvn_ra_neon/commit.c: commit_apply_txdelta()
         libsvn_wc/adm_crawler.c: svn_wc_transmit_text_deltas2()
         libsvn_client/commit_util.c: svn_client__do_commit()
         libsvn_client/commit.c: svn_client_commit4()
         svn/commit-cmd.c: svn_cl__commit()
         svn/main.c: main()

svn_io_open_uniquely_named tries to create a file named
'tempfile.tmp', then 'tempfile.1.tmp', 'tempfile.2.tmp', etc. One
problem is that this results in N failed attempts to find a uniquely
named file (where N is the number of committables already processed).
This makes the commit I/O bound and slows it down quite a bit -- the
total number of failed attempts being ((N(N+1)/2)-1) -- I'll look into
speeding this up...
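To illustrate the cost, here is a minimal Python simulation of the naming scheme described above (this is not the actual C implementation; the 'tempfile' base name and the use of O_EXCL-style exclusive creation are assumptions for the sketch). Each later call has to step past every temp file already created by the earlier calls, so the failed attempts grow quadratically:

```python
import os
import tempfile

def open_uniquely_named(dirpath, base="tempfile", ext=".tmp"):
    """Try 'tempfile.tmp', then 'tempfile.1.tmp', 'tempfile.2.tmp', ...
    until creation succeeds.  Returns (path, failed_attempt_count)."""
    failed = 0
    i = 0
    while True:
        name = f"{base}{ext}" if i == 0 else f"{base}.{i}{ext}"
        path = os.path.join(dirpath, name)
        try:
            # O_EXCL makes creation fail if the file already exists.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return path, failed
        except FileExistsError:
            failed += 1
            i += 1

def total_failed_attempts(n_committables):
    """Simulate a commit loop that leaves each temp file in place."""
    total = 0
    with tempfile.TemporaryDirectory() as d:
        for _ in range(n_committables):
            _, failed = open_uniquely_named(d)
            total += failed
    return total

# The k-th call fails k-1 times, so the total grows as O(N^2).
print(total_failed_attempts(100))  # → 4950
```

In this simulation the k-th item pays k-1 failed open() calls before finding a free name, which is where the I/O-bound slowdown comes from.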

...But worse is the fact that svn_io_open_uniquely_named() allocates
the potential file names (and converts them from UTF-8 to the native
path format) in the result_pool, rather than the scratch_pool. If you
are committing, say, 10,000 items, you can easily run out of memory on
a 32-bit machine. For example, using 1.6.x@37877* I tried to commit
(via neon) 10 added directories with 1000 added 4KB files in each
directory. The peak working set hit 1.20 GB before the commit failed
with an out-of-memory error.

In r37894 I changed svn_io_open_uniquely_named to use the scratch_pool
for potential file names (Greg's comments in the function foresaw
this, but he gave it a "meh" :-). With that fix in place, the above
commit of 10,000 files succeeds with the peak working set going *only*
to 194 MB.

Nominating this for backport to 1.6.x.


* ra_neon/ra_serf access on trunk isn't working for me on Windows at the moment.

On Wed, May 20, 2009 at 6:47 PM, Stefan Sperling <stsp_at_elego.de> wrote:
> On Wed, May 20, 2009 at 06:27:50PM -0400, Mark Phippard wrote:
>> On Wed, May 20, 2009 at 4:53 PM, Stefan Sperling <stsp_at_elego.de> wrote:
>> > If so, that's really bad. There were memory-related fixes for merge
>> > going into 1.6.2, but apparently they haven't fixed the problem you
>> > are seeing.
>> Let's not try to blame every problem on merge please?  The reporter
>> has clearly indicated that the problem happens during commit.  Commit
>> is commit, it does not matter if your working copy was edited by a
>> merge, a script or manual labor.
> Yes, that is true. It explains why the memory-related fixes for merge
> have nothing to do with it, which my quirky thinking failed to explain,
> before your comment.
> Thanks,
> Stefan

Received on 2009-05-29 21:02:48 CEST

This is an archived mail posted to the Subversion Dev mailing list.