
Re: Hashtable data freed before pool cleanup

From: Ruediger Pluem <rpluem_at_apache.org>
Date: Thu, 07 Jan 2010 16:26:20 +0100

On 07.01.2010 16:12, Bert Huijben wrote:
> Hi,
>
> Looking at apr's 1.3.x branch:
>
> In r817810 a change to the hash table implementation was introduced which
> moves the data of a hash table from the main pool into a subpool (as seen
> from the pool passed to apr_hash_make). This makes the contents of the
> hash table unavailable from pool cleanup handlers registered on that same
> pool, because subpools are cleared before the pool's own cleanup handlers
> run.
>
> For Subversion this is a breaking change, as we commonly use pool cleanup
> handlers that read hash tables (e.g. we close the open data files by
> iterating a hash in the cleanup handler).
>
> Switching to this apr version (once released) will make our code use
> pointers to already freed memory.
>
> A valid alternative would be to allocate a subpool in the parent pool of
> the pool passed to apr_hash_make, and to clear/destroy that subpool via a
> cleanup handler once the passed pool is cleared. (Or just keep the old
> behavior of reallocating in the same pool.)

This sounds like a valid point.

I would propose to revert r817810 / r817809 (for 1.3.x / 1.4.x) and only
keep r817806 (trunk). Graham?
IMHO we can backport this again later, once the problem is sorted out on trunk.

Regards

Rüdiger
Received on 2010-01-07 19:17:18 CET

This is an archived mail posted to the Subversion Dev mailing list.