We've decided to do a code review of how we're managing pools.
Specifically, at last week's meetings we decided that any sort of loop
should look like this:
    create subpool
    loop: {
        use subpool
        clear subpool
    }
    destroy subpool
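As a concrete sketch, the recommended shape might look like this in C. The `pool_create`/`pool_alloc`/`pool_clear`/`pool_destroy` functions below are a toy stand-in I wrote for illustration (modeled loosely on APR's `apr_pool_*` interface), not our real pool API:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy pool: a linked list of malloc'd blocks.  Only here so the
 * example is self-contained; stands in for the real pool type. */
typedef struct pool_block { struct pool_block *next; } pool_block_t;
typedef struct pool { pool_block_t *head; } pool_t;

pool_t *pool_create(void)
{
    return calloc(1, sizeof(pool_t));
}

void *pool_alloc(pool_t *p, size_t n)
{
    pool_block_t *b = malloc(sizeof(pool_block_t) + n);
    b->next = p->head;
    p->head = b;
    return b + 1;   /* memory just past the header */
}

/* Free everything allocated in the pool, but keep the pool usable. */
void pool_clear(pool_t *p)
{
    while (p->head) {
        pool_block_t *b = p->head;
        p->head = b->next;
        free(b);
    }
}

void pool_destroy(pool_t *p)
{
    pool_clear(p);
    free(p);
}

/* The recommended loop shape: one subpool, cleared per iteration,
 * destroyed once after the loop. */
void run_loop(int iterations)
{
    pool_t *subpool = pool_create();              /* create subpool  */
    for (int i = 0; i < iterations; i++) {        /* loop: {         */
        char *scratch = pool_alloc(subpool, 64);  /*   use subpool   */
        snprintf(scratch, 64, "iteration %d", i);
        pool_clear(subpool);                      /*   clear subpool */
    }                                             /* }               */
    pool_destroy(subpool);                        /* destroy subpool */
}
```

The payoff is that peak memory stays bounded at a single iteration's worth instead of growing with the loop, and clearing is cheaper than destroying and re-creating a pool on every pass.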
That's different from what a lot of directory-descending 'drivers'
are doing right now:
    recursive_dir_walker (pool)
    {
        create subpool in pool
        read_all_entries (subpool)
        loop over entries: {
            use subpool
            recursive_dir_walker (subpool)
        }
        destroy subpool
    }
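For contrast, here's one way the walker could follow the loop rule. The pool functions and the fake directory tree are toy stand-ins I made up so the example runs on its own (the toy pool has no parent/subpool relationship; in the real API the subpool would be created in the caller's pool), and the split into an entries subpool plus a per-iteration scratch pool is one possible reading of the rule, not something we've settled on:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy pool, standing in for the real pool type. */
typedef struct pool_block { struct pool_block *next; } pool_block_t;
typedef struct pool { pool_block_t *head; } pool_t;

pool_t *pool_create(void) { return calloc(1, sizeof(pool_t)); }

void *pool_alloc(pool_t *p, size_t n)
{
    pool_block_t *b = malloc(sizeof(pool_block_t) + n);
    b->next = p->head;
    p->head = b;
    return b + 1;
}

void pool_clear(pool_t *p)
{
    while (p->head) {
        pool_block_t *b = p->head;
        p->head = b->next;
        free(b);
    }
}

void pool_destroy(pool_t *p) { pool_clear(p); free(p); }

/* Fake directory node: children is NULL for a file, else a
 * NULL-terminated array of child nodes. */
typedef struct node {
    const char *name;
    struct node **children;
} node_t;

/* Walker in the recommended shape.  Entries live in a subpool for
 * the whole loop; per-entry scratch work goes into an iteration
 * pool cleared each time around.  Returns the number of entries
 * visited so the behavior is easy to check. */
int walk(const node_t *dir, int depth)
{
    pool_t *subpool = pool_create();    /* holds this level's entries */
    pool_t *iterpool = pool_create();   /* scratch, cleared per entry */
    int visited = 0;

    /* "read_all_entries(subpool)": copy the names into the subpool. */
    size_t n = 0;
    while (dir->children && dir->children[n])
        n++;
    char **names = pool_alloc(subpool, (n ? n : 1) * sizeof(char *));
    for (size_t i = 0; i < n; i++) {
        names[i] = pool_alloc(subpool, strlen(dir->children[i]->name) + 1);
        strcpy(names[i], dir->children[i]->name);
    }

    for (size_t i = 0; i < n; i++) {
        char *label = pool_alloc(iterpool, 256);  /* use scratch pool */
        snprintf(label, 256, "%*s%s", depth * 2, "", names[i]);
        puts(label);
        visited++;
        if (dir->children[i]->children)
            visited += walk(dir->children[i], depth + 1);
        pool_clear(iterpool);          /* scratch freed per iteration */
    }

    pool_destroy(iterpool);
    pool_destroy(subpool);             /* entries freed on the way out */
    return visited;
}
```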
We've also decided that per-object pools are unnecessary
-- at least that's what gstein has been saying all along. In theory,
we should never need to have "close_foo()" or "free_foo()" functions
in any of our APIs. Instead, we should let the caller manage memory
by allowing objects to be allocated in specific pools, and then making
object lifetime decisions by just clearing/destroying pools.
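In code, the convention might look like the sketch below. `foo_t` and `foo_create` are hypothetical names and the pool functions are a toy stand-in, purely for illustration; the point is that there's a create function taking a pool, and no matching close/free:

```c
#include <stdlib.h>
#include <string.h>

/* Toy pool, standing in for the real pool type. */
typedef struct pool_block { struct pool_block *next; } pool_block_t;
typedef struct pool { pool_block_t *head; } pool_t;

pool_t *pool_create(void) { return calloc(1, sizeof(pool_t)); }

void *pool_alloc(pool_t *p, size_t n)
{
    pool_block_t *b = malloc(sizeof(pool_block_t) + n);
    b->next = p->head;
    p->head = b;
    return b + 1;
}

void pool_destroy(pool_t *p)
{
    while (p->head) {
        pool_block_t *b = p->head;
        p->head = b->next;
        free(b);
    }
    free(p);
}

/* A hypothetical object following the convention: everything it
 * owns comes from the caller's pool, so there is no foo_close() or
 * foo_free().  The caller ends the object's life by clearing or
 * destroying the pool it was created in. */
typedef struct foo {
    char *name;
} foo_t;

foo_t *foo_create(const char *name, pool_t *pool)
{
    foo_t *f = pool_alloc(pool, sizeof(*f));
    f->name = pool_alloc(pool, strlen(name) + 1);
    strcpy(f->name, name);
    return f;
}
```

The caller's side then reads: `pool_create()`, `foo_create("x", pool)`, use the object, `pool_destroy(pool)` -- the object disappears with the pool, with no per-object teardown call.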
The question at hand is about the editor interface: it seems to break
this rule. The caller hands one pool to get_editor(), which then gets
stashed in the edit_baton. From there on, the editor does all of its
own internal memory management.
Jim and I just talked about this, and are wondering if this isn't a
flaw in the editor API. Perhaps each editor function should be like
our other interfaces -- take a pool in which to allocate (return) a
dir or file baton.
And along this line of thought, perhaps the editor funcs might take a
*2nd* pool for doing 'scratchwork'; think about it. Every
editor-driver is going to be looping over entries and will have a
'scratch' pool per-iteration -- why not let the editor use it too?
The lifetimes match up. It's kind of redundant to have the editor
creating and destroying a scratch pool internally.
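To sketch the two-pool idea (names like `open_file`, `result_pool`, and `scratch_pool` are my own placeholders, and the pool type is again a toy stand-in): the baton comes back in the result pool, and any temporary work goes in the scratch pool the driver is already clearing once per iteration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy pool, standing in for the real pool type. */
typedef struct pool_block { struct pool_block *next; } pool_block_t;
typedef struct pool { pool_block_t *head; } pool_t;

pool_t *pool_create(void) { return calloc(1, sizeof(pool_t)); }

void *pool_alloc(pool_t *p, size_t n)
{
    pool_block_t *b = malloc(sizeof(pool_block_t) + n);
    b->next = p->head;
    p->head = b;
    return b + 1;
}

void pool_clear(pool_t *p)
{
    while (p->head) {
        pool_block_t *b = p->head;
        p->head = b->next;
        free(b);
    }
}

void pool_destroy(pool_t *p) { pool_clear(p); free(p); }

typedef struct file_baton {
    char *path;
} file_baton_t;

/* Hypothetical editor function with the proposed signature: the
 * returned baton is allocated in result_pool (so the driver controls
 * its lifetime), while temporary work lands in scratch_pool, which
 * the driver clears once per iteration. */
file_baton_t *open_file(const char *path,
                        pool_t *result_pool,
                        pool_t *scratch_pool)
{
    /* Scratch work: build a throwaway log message. */
    size_t len = strlen(path) + 32;
    char *msg = pool_alloc(scratch_pool, len);
    snprintf(msg, len, "opening '%s'", path);
    puts(msg);

    /* Result: the baton lives as long as the driver wants it to. */
    file_baton_t *fb = pool_alloc(result_pool, sizeof(*fb));
    fb->path = pool_alloc(result_pool, strlen(path) + 1);
    strcpy(fb->path, path);
    return fb;
}
```

A driver would call `open_file(entry, result_pool, iterpool)` inside its loop and `pool_clear(iterpool)` at the bottom of each pass; the baton survives the clear precisely because it lives in the result pool.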
Thoughts?
Received on Sat Oct 21 14:36:28 2006