On Sat, Mar 09, 2002 at 09:02:05AM -0500, Kevin Pilch-Bisson wrote:
> On Sat, Mar 09, 2002 at 12:40:14AM -0500, Daniel Berlin wrote:
> > On Fri, 2002-03-08 at 18:57, Ben Collins wrote:
> > > I've tracked down the overuse of locks to the stabilize_node()
> > > function in dag.c. This is the call where we traverse the committed tree
> > > and mark everything immutable.
> > >
> > > During this portion of my import, some 200,000 locks and 885 lock
> > > objects are used up, before being released. I'm trying to find out if
> > > there is a way we can explicitly release locks as we go (using
> > > db->lock_vec or something).
> > Well, hmmmm.
> > 200,000 locks?
> > For a given transaction (and all its child transactions, whose locks do
> > *not* go away until the parent is committed/aborted, not when *the
> > child* is committed/aborted), you shouldn't be getting that many locks.
> The problem is that we need to lock every single record in the BTREE (for
> large imports) while we do the stabilization. Thus for large commits, we run
> out of locks. The tree Ben Collins is using is on the order of 10,000 files
> and 1,000 dirs. Thus we need locks for each of those entries in nodes, in
> reps, and one for each record of the dup-key based strings, revisions, and
> transactions tables. If you say there are two locks for each node in strings,
> then we are up to 66,000 locks already, not counting anything that might be
> read-write.
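A rough back-of-the-envelope check of that 66,000 figure, sketched under the stated assumptions (one lock per record in the nodes, reps, revisions, and transactions tables, plus two per record in strings — the table names come from the thread, the per-table multipliers are my reading of it, not measurements):

```python
# Lock-count estimate for the example tree: 10,000 files + 1,000 dirs.
files, dirs = 10_000, 1_000
entries = files + dirs

# Assumed locks taken per tree entry in each table during stabilization.
locks_per_entry = {
    "nodes": 1,
    "reps": 1,
    "strings": 2,       # "two locks for each node in strings"
    "revisions": 1,
    "transactions": 1,
}

total = entries * sum(locks_per_entry.values())
print(total)  # 66000 -- matching the figure quoted above
```

This is only an order-of-magnitude sanity check; it ignores read locks and any per-transaction overhead, which is why the thread says "not counting anything that might be read-write."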
The stabilization seems to be even more of a problem for a large commit
to a large existing tree. Since I figured out how to get rid of the db4
lock mem segment ENOMEM problems, I went back to the original test case
(linux-kernel bk 1.001 import, then apply 1.002 patch and commit).
The commit starts off as well as expected. During the initial file
listing (Sending ...), it consumes some 230megs of RAM (much better than
before). Then it sends the deltas (Transferring ...), still consuming
memory. Then it gets to the stabilization. This is where things go _really_ bad.
Note, this test was on a machine with 128megs RAM, 550Megs swap.
During stabilization, memory starts to be consumed quickly, on the order
of about 5megs per second. This continues till it gets to about 1.9gigs
of memory usage and the VM kills it. You may recall that I said the
machine has only ~680Megs of memory. For those that don't know, Linux
overcommits memory and allocates pages lazily (copy-on-write from the
zero page), which means the memory allocated to a process is not really
backed by physical pages until the process writes to them. What does
this mean? Most of the memory that svn is
allocating is _wasted_ during this process. I'm guessing stringbuf's
(probably in skel.c during parse/unparse when it changes mutable to
immutable) are the culprit.
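The lazy-allocation behaviour described above is easy to see for yourself; here is a minimal sketch (the reasoning is Linux-flavoured, though the anonymous mmap itself is portable):

```python
import mmap

# Reserve 512MB of anonymous memory. On Linux this only creates a
# virtual mapping; no physical pages are committed up front, which is
# how a process can "use" far more memory than the machine has.
size = 512 * 1024 * 1024
m = mmap.mmap(-1, size)

# The mapping exists and reads back zero-filled -- reads are satisfied
# from the shared zero page without dirtying anything...
assert len(m) == size
assert m[0] == 0

# ...but writing a byte faults a private page in, and only then does
# the process's resident set actually grow.
m[0] = 1
assert m[0] == 1
m.close()
```

That is why the 1.9 gigs reported here is mostly untouched virtual space: the allocator grabbed the address range, but only the pages svn actually wrote to cost real RAM and swap.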
/ Ben Collins -- Debian GNU/Linux -- WatchGuard.com \
` firstname.lastname@example.org -- Ben.Collins@watchguard.com '
Received on Sat Mar 9 16:59:32 2002