
Re: [PATCH] Hooks

From: Philip Martin <philip_at_codematters.co.uk>
Date: 2003-09-05 19:16:39 CEST

"D.J. Heap" <dj@shadyvale.net> writes:

> I did add the 'unclosed files and pipes will be handled at pool
> cleanup'

Oops, sorry, missed that.

> (which is probably not clear enough about exactly what is left
> around), and each hook is called only once on a commit, so it's not
> going to leave much around for long.

Commits can take a non-trivial amount of time. Between running a hook
and pool cleanup, are there any places where we may have to wait for
the network?

> But since they are basically
> temporary for this function, it does seem cleaner as you outlined
> below (and follows the HACKING guidelines better, as I understand
> them).

I haven't looked at the lifetime of txn->pool, how long does it last?

> I'm wondering, though, how error conditions are supposed to be handled
> -- in some areas it seems error conditions mean it's ok to exit
> without explicit cleanup -- ie, creating a subpool for temp stuff but
> then using the SVN_ERR macro for error handling (in libsvn_repos/log.c
> the detect_changed function, for example). But in other areas, error
> handling is more explicit and cleanup of temps is done as much as
> possible before exiting -- like this function used to be (which tends
> to make the code messy as you noted with the previous patch).

Some of the existing error handling isn't correct. The run_hook_cmd
function is an obvious example: in some places it explicitly cleans
up, but in other places it bypasses the cleanup, so it's inconsistent.

In general SVN_ERR is OK, at least as far as memory goes. At some
stage the error will be handled, and it seems reasonable to require
that the code handling the error clear any related pools.
Non-memory resources require more thought.

> Is the more explicit cleanup desired in this area because it's dealing
> with pipes and files, or was it just the way it was done at the time
> or something? Is it ok to create a subpool and use SVN_ERR to bail on
> error conditions, or should I explicitly clear it before exiting if
> any errors are encountered so that the files/pipes are guaranteed
> closed even in the face of errors?

Yes, it's the open files that are the particular worry. This function
gets called by mod_dav_svn, so it could be part of a multi-threaded
server process. When that happens there may be multiple commits
happening simultaneously in the same process: how many open files are
going to exist at once? What sort of limits exist?

I suspect that the OS limit will usually be sufficiently high that
this is not a problem in practice. It's possible I'm worrying about a
problem that doesn't exist.

> I'm not trying to be a bother, just clarify my understanding of how
> things should be done.

Given the confused state of the current run_hook_cmd code, it
may well be that your current patch is enough of an improvement to
warrant committing it. I'm not vetoing it: I don't know for certain
that there is a problem; I'm just asking questions.

-- 
Philip Martin
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Fri Sep 5 19:17:28 2003
