
[PATCH] Pools space-time tradeoff

From: Ben Collins-Sussman <sussman_at_collab.net>
Date: 2002-05-22 19:15:42 CEST

[crossposted to APR and SVN dev lists.]

Hi, all. I've been working with Sander to resolve some pool problems.

** Here's the problem:

We have two use-cases in Subversion now that literally run out of
memory; the application footprint grows without bound until the OS
kills things. Here they are:

  1. 'svn checkout file:///repository -r HEAD'
  2. 'svnadmin dump repository > dumpfile'

These use-cases, it turns out, are nearly identical. They both simply
walk over a repository tree and push data at a vtable, which in turn
writes the data to disk (either as a working copy, or as a dumpfile).
The pools are controlled by our filesystem-walker, and we're being
*extremely* careful about controlling pool lifetimes and clearing subpools
in loops. (Gstein has pounded these concepts into our heads. :-))

  [Note: 'svn checkout http://...' works fine, because svn is doing an
   independent GET request for each file... which means a separate
   pool created and destroyed by httpd for each item.]

The real issue here is that our current pool implementation grows
without bound: memory is never returned to the OS until the entire
pool hierarchy is destroyed when the main() process exits. I believe
that when Mike Pilato stat'ed our pools, we were actually only using
a few megs... but we were still creating a memory footprint of 200M+!

** Here's the solution:

Sander dusted off two of his pool patches that had been ignored until
now, and combined them into a single patch. Here's what they do:

  1. Keep each pool's list of 'old' active blocks sorted by size. If
      a request comes in that won't fit in the current active block,
      then try to alloc the request in the largest 'old' active block
      (instead of alloc'ing a completely new active block).
      Effectively, we get some memory re-use now.

  2. When a pool is destroyed or cleared, normally all of its
      inactive blocks are given back to a 'global free list' in the
      pool hierarchy. We now have a threshold size that can be set:
      if the global-free-list ever grows larger than the threshold,
      then the 'extra' inactive blocks are actually free()d. We've
      set the maximum free-list size at 4M for now.

Anyway, I've included the patches below for people to look at.
Apparently Sander posted these before, but nobody had a reason to
care. Well, now Subversion *really* needs these features.

I've tested this patch on our two use cases: the memory footprint for
'svn checkout' bounces up and down, but eventually plateaus around
25M. The footprint for 'svnadmin dump' also bounces around, but
plateaus around 17M. A huge difference!

Sander is worried about possible pool-performance hits from these
changes, so he's hoping folks might be able to do some timing stats.
(Personally, via my sophisticated use of the unix 'time' command, I
find no noticeable difference.)

The patch affects two APR files (apr_allocator.h, apr_pools.c), and
one svn file (svn_error.c)... assuming svn users want to take
advantage of the new pool features. I've posted the patch below.

Assuming nobody objects, Sander plans to apply these changes in a day
or two. We just want to give people time to look them over.

Index: include/apr_allocator.h
===================================================================
RCS file: /home/cvs/apr/include/apr_allocator.h,v
retrieving revision 1.4
diff -u -r1.4 apr_allocator.h
--- include/apr_allocator.h 9 Apr 2002 06:56:55 -0000 1.4
+++ include/apr_allocator.h 22 May 2002 16:11:50 -0000
@@ -83,7 +83,9 @@
 
 struct apr_memnode_t {
     apr_memnode_t *next;
+    apr_memnode_t **ref;
     apr_uint32_t index;
+    apr_uint32_t free_index;
     char *first_avail;
     char *endp;
 };
@@ -150,6 +152,16 @@
  * @param allocator The allocator to get the owner from
  */
 APR_DECLARE(apr_pool_t *) apr_allocator_get_owner(apr_allocator_t *allocator);
+
+
+/**
+ * Set the current threshold at which the allocator should start
+ * giving blocks back to the system.
+ * @param allocator The allocator to set the threshold on
+ * @param size The threshold. 0 == unlimited.
+ */
+APR_DECLARE(void) apr_allocator_set_max_free(apr_allocator_t *allocator,
+                                             apr_size_t size);
 
 #include "apr_thread_mutex.h"
 
Index: memory/unix/apr_pools.c
===================================================================
RCS file: /home/cvs/apr/memory/unix/apr_pools.c,v
retrieving revision 1.167
diff -u -r1.167 apr_pools.c
--- memory/unix/apr_pools.c 5 May 2002 22:24:41 -0000 1.167
+++ memory/unix/apr_pools.c 22 May 2002 16:11:55 -0000
@@ -93,6 +93,8 @@
 
 struct apr_allocator_t {
     apr_uint32_t max_index;
+    apr_uint32_t max_free_index;
+    apr_uint32_t current_free_index;
 #if APR_HAS_THREADS
     apr_thread_mutex_t *mutex;
 #endif /* APR_HAS_THREADS */
@@ -166,6 +168,32 @@
     return allocator->owner;
 }
 
+APR_DECLARE(void) apr_allocator_set_max_free(apr_allocator_t *allocator,
+                                             apr_size_t size)
+{
+    apr_uint32_t max_free_index;
+
+#if APR_HAS_THREADS
+    apr_thread_mutex_t *mutex;
+
+    mutex = apr_allocator_get_mutex(allocator);
+    if (mutex != NULL)
+        apr_thread_mutex_lock(mutex);
+#endif /* APR_HAS_THREADS */
+
+    max_free_index = APR_ALIGN(size, BOUNDARY_SIZE) >> BOUNDARY_INDEX;
+    allocator->current_free_index += max_free_index;
+    allocator->current_free_index -= allocator->max_free_index;
+    allocator->max_free_index = max_free_index;
+    if (allocator->current_free_index > max_free_index)
+        allocator->current_free_index = max_free_index;
+
+#if APR_HAS_THREADS
+    if (mutex != NULL)
+        apr_thread_mutex_unlock(mutex);
+#endif
+}
+
 APR_INLINE
 APR_DECLARE(apr_memnode_t *) apr_allocator_alloc(apr_allocator_t *allocator,
                                                  apr_size_t size)
@@ -228,6 +256,10 @@
                 allocator->max_index = max_index;
             }
 
+            allocator->current_free_index += node->index;
+            if (allocator->current_free_index > allocator->max_free_index)
+                allocator->current_free_index = allocator->max_free_index;
+
 #if APR_HAS_THREADS
             if (allocator->mutex)
                 apr_thread_mutex_unlock(allocator->mutex);
@@ -264,6 +296,10 @@
         if (node) {
             *ref = node->next;
 
+            allocator->current_free_index += node->index;
+            if (allocator->current_free_index > allocator->max_free_index)
+                allocator->current_free_index = allocator->max_free_index;
+
 #if APR_HAS_THREADS
             if (allocator->mutex)
                 apr_thread_mutex_unlock(allocator->mutex);
@@ -299,8 +335,9 @@
 APR_DECLARE(void) apr_allocator_free(apr_allocator_t *allocator,
                                      apr_memnode_t *node)
 {
-    apr_memnode_t *next;
+    apr_memnode_t *next, *freelist = NULL;
     apr_uint32_t index, max_index;
+    apr_uint32_t max_free_index, current_free_index;
 
 #if APR_HAS_THREADS
     if (allocator->mutex)
@@ -308,6 +345,8 @@
 #endif /* APR_HAS_THREADS */
 
     max_index = allocator->max_index;
+    max_free_index = allocator->max_free_index;
+    current_free_index = allocator->current_free_index;
 
     /* Walk the list of submitted nodes and free them one by one,
      * shoving them in the right 'size' buckets as we go.
@@ -316,7 +355,11 @@
         next = node->next;
         index = node->index;
 
-        if (index < MAX_INDEX) {
+        if (max_free_index != 0 && index > current_free_index) {
+            node->next = freelist;
+            freelist = node;
+        }
+        else if (index < MAX_INDEX) {
             /* Add the node to the appropiate 'size' bucket. Adjust
              * the max_index when appropiate.
              */
@@ -325,6 +368,7 @@
                 max_index = index;
             }
             allocator->free[index] = node;
+            current_free_index -= index;
         }
         else {
             /* This node is too large to keep in a specific size bucket,
@@ -332,15 +376,23 @@
              */
             node->next = allocator->free[0];
             allocator->free[0] = node;
+            current_free_index -= index;
         }
     } while ((node = next) != NULL);
 
     allocator->max_index = max_index;
+    allocator->current_free_index = current_free_index;
 
 #if APR_HAS_THREADS
     if (allocator->mutex)
         apr_thread_mutex_unlock(allocator->mutex);
 #endif /* APR_HAS_THREADS */
+
+    while (freelist != NULL) {
+        node = freelist;
+        freelist = node->next;
+        free(node);
+    }
 }
 
 
@@ -544,6 +596,7 @@
     apr_memnode_t *active, *node;
     void *mem;
     char *endp;
+    apr_uint32_t free_index;
 
     size = APR_ALIGN_DEFAULT(size);
     active = pool->active;
@@ -557,17 +610,52 @@
         return mem;
     }
 
-    if ((node = apr_allocator_alloc(pool->allocator, size)) == NULL) {
-        if (pool->abort_fn)
-            pool->abort_fn(APR_ENOMEM);
-
-        return NULL;
+    node = active->next;
+    endp = node->first_avail + size;
+    if (endp < node->endp) {
+        *node->ref = node->next;
+        node->next->ref = node->ref;
+    }
+    else {
+        if ((node = apr_allocator_alloc(pool->allocator, size)) == NULL) {
+            if (pool->abort_fn)
+                pool->abort_fn(APR_ENOMEM);
+        }
+        endp = node->first_avail + size;
     }
 
-    active->next = pool->active = node;
+    node->free_index = 0;
 
     mem = node->first_avail;
-    node->first_avail += size;
+    node->first_avail = endp;
+
+    node->ref = active->ref;
+    *node->ref = node;
+    node->next = active;
+    active->ref = &node->next;
+
+    pool->active = node;
+
+    free_index = (APR_ALIGN(active->endp - active->first_avail + 1,
+                            BOUNDARY_SIZE) - BOUNDARY_SIZE) >> BOUNDARY_INDEX;
+
+    active->free_index = free_index;
+    node = active->next;
+    if (free_index >= node->free_index)
+        return mem;
+
+    do {
+        node = node->next;
+    }
+    while (free_index < node->free_index);
+
+    *active->ref = active->next;
+    active->next->ref = active->ref;
+
+    active->ref = node->ref;
+    *active->ref = active;
+    active->next = node;
+    node->ref = &active->next;
 
     return mem;
 }
@@ -625,11 +713,13 @@
     active = pool->active = pool->self;
     active->first_avail = pool->self_first_avail;
 
-    if (active->next == NULL)
+    if (active->next == active)
         return;
 
+    *active->ref = NULL;
     apr_allocator_free(pool->allocator, active->next);
-    active->next = NULL;
+    active->next = active;
+    active->ref = &active->next;
 }
 
 APR_DECLARE(void) apr_pool_destroy(apr_pool_t *pool)
@@ -672,6 +762,7 @@
      */
     allocator = pool->allocator;
     active = pool->self;
+    *active->ref = NULL;
 
 #if APR_HAS_THREADS
     if (apr_allocator_get_owner(allocator) == pool) {
@@ -724,6 +815,9 @@
         return APR_ENOMEM;
     }
 
+    node->next = node;
+    node->ref = &node->next;
+
     pool = (apr_pool_t *)node->first_avail;
     node->first_avail = pool->self_first_avail = (char *)pool + SIZEOF_POOL_T;
 
@@ -791,7 +885,7 @@
 struct psprintf_data {
     apr_vformatter_buff_t vbuff;
     apr_memnode_t *node;
-    apr_allocator_t *allocator;
+    apr_pool_t *pool;
     apr_byte_t got_a_new_node;
     apr_memnode_t *free;
 };
@@ -800,29 +894,70 @@
 {
     struct psprintf_data *ps = (struct psprintf_data *)vbuff;
     apr_memnode_t *node, *active;
-    apr_size_t cur_len;
+    apr_size_t cur_len, size;
     char *strp;
-    apr_allocator_t *allocator;
+    apr_pool_t *pool;
+    apr_uint32_t free_index;
 
-    allocator = ps->allocator;
-    node = ps->node;
+    pool = ps->pool;
+    active = ps->node;
     strp = ps->vbuff.curpos;
-    cur_len = strp - node->first_avail;
+    cur_len = strp - active->first_avail;
+    size = cur_len << 1;
 
-    if ((active = apr_allocator_alloc(allocator, cur_len << 1)) == NULL)
-        return -1;
+    node = active->next;
+    if (!ps->got_a_new_node && node->first_avail + size < node->endp) {
+        *node->ref = node->next;
+        node->next->ref = node->ref;
+
+        node->ref = active->ref;
+        *node->ref = node;
+        node->next = active;
+        active->ref = &node->next;
+
+        node->free_index = 0;
+
+        pool->active = node;
+
+        free_index = (APR_ALIGN(active->endp - active->first_avail + 1,
+                                BOUNDARY_SIZE) - BOUNDARY_SIZE) >> BOUNDARY_INDEX;
+
+        active->free_index = free_index;
+        node = active->next;
+        if (free_index < node->free_index) {
+            do {
+                node = node->next;
+            }
+            while (free_index < node->free_index);
+
+            *active->ref = active->next;
+            active->next->ref = active->ref;
 
-    memcpy(active->first_avail, node->first_avail, cur_len);
+            active->ref = node->ref;
+            *active->ref = active;
+            active->next = node;
+            node->ref = &active->next;
+        }
+
+        node = pool->active;
+    }
+    else {
+        if ((node = apr_allocator_alloc (pool->allocator, size)) == NULL)
+            return -1;
+
+        if (ps->got_a_new_node) {
+            active->next = ps->free;
+            ps->free = node;
+        }
 
-    if (ps->got_a_new_node) {
-        node->next = ps->free;
-        ps->free = node;
+        ps->got_a_new_node = 1;
     }
 
-    ps->node = active;
-    ps->vbuff.curpos = active->first_avail + cur_len;
-    ps->vbuff.endpos = active->endp - 1; /* Save a byte for NUL terminator */
-    ps->got_a_new_node = 1;
+    memcpy(node->first_avail, active->first_avail, cur_len);
+
+    ps->node = node;
+    ps->vbuff.curpos = node->first_avail + cur_len;
+    ps->vbuff.endpos = node->endp - 1; /* Save a byte for NUL terminator */
 
     return 0;
 }
@@ -832,10 +967,11 @@
     struct psprintf_data ps;
     char *strp;
     apr_size_t size;
-    apr_memnode_t *active;
+    apr_memnode_t *active, *node;
+    apr_uint32_t free_index;
 
     ps.node = active = pool->active;
-    ps.allocator = pool->allocator;
+    ps.pool = pool;
     ps.vbuff.curpos = ps.node->first_avail;
 
     /* Save a byte for the NUL terminator */
@@ -858,15 +994,48 @@
     strp = ps.node->first_avail;
     ps.node->first_avail += size;
 
+    if (ps.free)
+        apr_allocator_free(pool->allocator, ps.free);
+
     /*
      * Link the node in if it's a new one
      */
-    if (ps.got_a_new_node) {
-        active->next = pool->active = ps.node;
+    if (!ps.got_a_new_node)
+        return strp;
+
+    active = pool->active;
+    node = ps.node;
+
+    node->free_index = 0;
+
+    node->ref = active->ref;
+    *node->ref = node;
+    node->next = active;
+    active->ref = &node->next;
+
+    pool->active = node;
+
+    free_index = (APR_ALIGN(active->endp - active->first_avail + 1,
+                            BOUNDARY_SIZE) - BOUNDARY_SIZE) >> BOUNDARY_INDEX;
+
+    active->free_index = free_index;
+    node = active->next;
+
+    if (free_index >= node->free_index)
+        return strp;
+
+    do {
+        node = node->next;
     }
+    while (free_index < node->free_index);
+
+    *active->ref = active->next;
+    active->next->ref = active->ref;
 
-    if (ps.free)
-        apr_allocator_free(ps.allocator, ps.free);
+    active->ref = node->ref;
+    *active->ref = active;
+    active->next = node;
+    node->ref = &active->next;
 
     return strp;
 }

Index: ./subversion/libsvn_subr/svn_error.c
===================================================================
--- ./subversion/libsvn_subr/svn_error.c
+++ ./subversion/libsvn_subr/svn_error.c Wed May 22 10:39:46 2002
@@ -25,6 +25,7 @@
 #include "apr_pools.h"
 #include "apr_strings.h"
 #include "apr_hash.h"
+#include "apr_allocator.h"
 
 #include "svn_pools.h"
 #include "svn_error.h"
@@ -423,18 +424,46 @@
 SVN_POOL_FUNC_DEFINE(apr_pool_t *, svn_pool_create)
 {
   apr_pool_t *ret_pool;
+  apr_allocator_t *allocator = NULL;
+  apr_status_t apr_err;
+
+  /* For the top level pool we want a separate allocator */
+  if (pool == NULL)
+    {
+      apr_err = apr_allocator_create (&allocator);
+      if (apr_err)
+        abort_on_pool_failure (apr_err);
+
+      /* Use magic number for testing, 4MB */
+      apr_allocator_set_max_free (allocator, 4096 * 1024);
+    }
 
 #if !APR_POOL_DEBUG
-  apr_pool_create_ex (&ret_pool, pool, abort_on_pool_failure, NULL);
+  apr_pool_create_ex (&ret_pool, pool, abort_on_pool_failure, allocator);
 #else /* APR_POOL_DEBUG */
   apr_pool_create_ex_debug (&ret_pool, pool, abort_on_pool_failure,
-                            NULL, file_line);
+                            allocator, file_line);
 #endif /* APR_POOL_DEBUG */
 
   /* If there is no parent, then initialize ret_pool as the "top". */
   if (pool == NULL)
     {
-      apr_status_t apr_err = svn_error_init_pool (ret_pool);
+#if APR_HAS_THREADS
+      {
+        apr_thread_mutex_t *mutex;
+
+        apr_err = apr_thread_mutex_create (&mutex, APR_THREAD_MUTEX_DEFAULT,
+                                           ret_pool);
+        if (apr_err)
+          abort_on_pool_failure (apr_err);
+
+        apr_allocator_set_mutex (allocator, mutex);
+      }
+#endif /* APR_HAS_THREADS */
+
+      apr_allocator_set_owner (allocator, ret_pool);
+
+      apr_err = svn_error_init_pool (ret_pool);
       if (apr_err)
         abort_on_pool_failure (apr_err);
     }

Received on Wed May 22 19:19:02 2002

This is an archived mail posted to the Subversion Dev mailing list.
