
svnserve malloc error

From: Brian O'Meara <bcomeara_at_nescent.org>
Date: 2007-09-12 20:58:36 CEST

I am trying to update ("svn update") a directory which contains 196
subdirectories, each with 2800 files (so, 548,800 files in total, all
individually under ~40 K in size). The update on the client computer
fails with the following error message:

svnserve(20088) malloc: *** vm_allocate(size=8421376) failed (error
code=3)
svnserve(20088) malloc: *** error: can't allocate region
svnserve(20088) malloc: *** set a breakpoint in szone_error to debug
svn: Connection closed unexpectedly

My subversion server is a 2 x 2.66 GHz MacPro with 2 GB of RAM (OS
10.4.10); monitoring svnserve during this process, the amount of
memory it uses continues to rise until it crashes (real memory size
for svnserve when it crashes is at 0.99 GB, virtual memory is at 1.91
GB). I think a related issue came up before, in 2004 (a mail from
that mailing-list discussion is attached below). I am running svnserve
version 1.4.5 (r25188). Am I doing something wrong, or is there some
sort of solution? I've been using Subversion for a few years now, but
have only recently tried dealing with this many files in a directory.

Thank you,
Brian O'Meara

Earlier message about svnserve memory:

http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=60871

Date: Sun, 21 Mar 2004 14:23:35 -0500
From: Greg Hudson <ghudson@MIT.EDU>
Subject: Memory usage on server side with svnserve

Larry Shatzer wrote:
> I am trying to check in a directory that has just under 6,000 files
> (98% are new, with the rest of the 2% with changes). The total size
> of the directory is around 5 megabytes. (lots and lots of small XML
> files).

Yeah, I would expect this commit to take about 48MB of memory,
possibly more.

[The rest of this is going to be more comprehensible to other
developers than to Larry.]

The problem is that a subpool is created for each file during the
first part of the commit, and it isn't destroyed until the "Sending
file contents..." part of the commit. (File data is sent after
directory data so that you can learn of conflicts sooner.) Since
subpools take up a minimum of 8K, you wind up using 8K * 6000 = about
48MB of memory--possibly more, due to various allocation
inefficiencies.

Various theories about what subsystem is at fault:

   1. The APR pool system, for using such a large minimum allocation
      size.

   2. The commit design, for not being streamy in this regard.

   3. The ra_svn editor driver, for using a separate subpool for each
      file.

Looking harder at #3: the commit_utils driver avoids this problem on
the client side by lumping all file-with-text-mod batons into a single
pool, which it can do because it knows exactly how long the files are
going to live. The ra_svn driver doesn't have that knowledge, but
perhaps it could do better by having a single reference-counted pool
for files. During an import (where file data is not held until the
end), the refcount would drop to zero and the pool would be cleared
after each file, but during a commit, all files would live in the same
pool. There's no way to know ahead of time whether a file is going to
have a text mod, though, so it couldn't be quite as efficient as the
client-side editor driver.

Received on Wed Sep 12 23:54:40 2007

This is an archived mail posted to the Subversion Users mailing list.
