Greg Hudson <email@example.com> writes:
> So, the grand solution to error lifetimes seems to be: when you create
> the top-level pool in a program, a special error pool is created as a
> sub-pool, and error memory is allocated within the error pool.
Actually, an error is always allocated within a subpool of the grand
error pool, but basically you've got the idea, yeah.
> Does the error subpool ever get cleared? It doesn't look like it.
> That's a memory leak situation.
It gets cleared if the top-level pool gets cleared. It's up to
whoever created that pool to clear it -- generally, there will be one
pool associated with each `request' (for the appropriate meaning of
`request' in a given context).
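To make the hierarchy concrete, here is a hypothetical sketch of that
pool arrangement -- not APR's actual implementation, just a toy pool
type (all names invented) where destroying a pool destroys its
subpools, so clearing the top-level pool reclaims the error pool and
every error subpool under it:

```c
#include <stdlib.h>

/* Toy pool: owns its allocations and its child pools.  Destroying a
 * pool destroys its children first, then frees its own blocks. */
typedef struct pool {
    void **blocks;            /* allocations owned by this pool */
    size_t nblocks;
    struct pool **children;   /* subpools, destroyed with this pool */
    size_t nchildren;
} pool_t;

static size_t live_allocations = 0;   /* instrumentation for the example */

pool_t *pool_create(pool_t *parent)
{
    pool_t *p = calloc(1, sizeof(*p));
    if (parent) {
        parent->children = realloc(parent->children,
            (parent->nchildren + 1) * sizeof(*parent->children));
        parent->children[parent->nchildren++] = p;
    }
    return p;
}

void *pool_alloc(pool_t *p, size_t size)
{
    void *block = malloc(size);
    p->blocks = realloc(p->blocks, (p->nblocks + 1) * sizeof(*p->blocks));
    p->blocks[p->nblocks++] = block;
    live_allocations++;
    return block;
}

void pool_destroy(pool_t *p)
{
    size_t i;
    for (i = 0; i < p->nchildren; i++)
        pool_destroy(p->children[i]);  /* subpools die with their parent */
    for (i = 0; i < p->nblocks; i++) {
        free(p->blocks[i]);
        live_allocations--;
    }
    free(p->children);
    free(p->blocks);
    free(p);
}
```

With this, the scheme above is: create the top-level pool per
`request', create the error pool as its subpool, allocate each error
in a fresh subpool of the error pool, and a single pool_destroy on the
top-level pool frees the lot.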
So it's not a memory leak situation; it's just that, from within the
library, you can't tell when a given error's memory gets freed.
This is the way it has to be, I think: a library that provides
interfaces which return errors can't know what the lifetimes of those
errors should be, only the library's user can know.
(Note: there are occasional places where the library will free errors
explicitly -- errors that it knows should not be reported back to the
caller. E.g., when checking for a lockfile, don't return an error
until max_lockout_time has passed.)
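That lockfile pattern can be sketched as follows -- a hypothetical
retry loop (all names invented, with a bounded retry count standing in
for max_lockout_time) in which the library frees each transient error
itself and only the final, genuine failure is returned to the caller:

```c
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in error object, with a live counter so leaks show. */
typedef struct error { char msg[64]; } error_t;
static int live_errors = 0;

static error_t *error_create(const char *msg)
{
    error_t *e = malloc(sizeof(*e));
    strncpy(e->msg, msg, sizeof(e->msg) - 1);
    e->msg[sizeof(e->msg) - 1] = '\0';
    live_errors++;
    return e;
}

static void error_free(error_t *e) { free(e); live_errors--; }

/* Pretend lock attempt: fails until *attempts_left reaches zero. */
static error_t *try_lock(int *attempts_left)
{
    if (*attempts_left > 0) {
        (*attempts_left)--;
        return error_create("lockfile held by another process");
    }
    return NULL;  /* success */
}

/* Retry up to max_tries.  Transient errors are freed here, inside the
 * library, and never reported; only a final timeout error escapes. */
error_t *acquire_lock(int *attempts_left, int max_tries)
{
    error_t *err = NULL;
    for (int i = 0; i < max_tries; i++) {
        err = try_lock(attempts_left);
        if (err == NULL)
            return NULL;                 /* got the lock */
        if (i + 1 < max_tries)
            error_free(err);             /* transient: discard, retry */
    }
    return err;  /* timed out: this one does go back to the caller */
}
```

This is the one case where the library does know an error's lifetime
(it ends before the call returns), so it can free the error itself.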
Received on Sat Oct 21 14:36:08 2006