Re: fork/exec for hook scripts with a large FSFS cache

From: Greg Stein <gstein_at_gmail.com>
Date: Wed, 14 Nov 2012 14:37:59 -0500

On Wed, Nov 14, 2012 at 8:49 AM, Philip Martin
<philip.martin_at_wandisco.com> wrote:
>...
> Having hooks run in a separate process is complicated. The process
> would need to be multi-threaded, or multi-process, to avoid hooks
> running in serial. stdin/out/err would need to be handled
> somehow. Pipes perhaps? By passing file descriptors across a Unix
> domain socket?

We could do whatever mod_cgid is doing.
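
For the record, the fd-passing Philip mentions is only a handful of
lines on POSIX, and it's essentially what mod_cgid does. A rough,
untested sketch (not mod_cgid's actual code):

    /* Sketch: hand a connection's stdin/out/err fds to the hook
     * daemon over a connected Unix domain socket via SCM_RIGHTS. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int
    send_fds(int sock, const int fds[3])
    {
        struct msghdr msg;
        struct cmsghdr *cmsg;
        struct iovec iov;
        char byte = 'F';               /* must carry at least one data byte */
        union
        {
            char buf[CMSG_SPACE(3 * sizeof(int))];
            struct cmsghdr align;      /* force correct alignment */
        } u;

        iov.iov_base = &byte;
        iov.iov_len = 1;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(3 * sizeof(int));
        memcpy(CMSG_DATA(cmsg), fds, 3 * sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }

The daemon would then dup2() the received fds onto the hook child's
0/1/2 before exec.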

But with that said: most hooks don't generate stdout or stderr. We
could ship the parameters and a stdin blob over to the daemon and run
the hook there. This simplified model would only work if it were
acceptable to *not* return stdout/err to the client. (Anything could
still be logged on the server.)

You don't really need multiple processes or threads if you run an
async network loop, the way serf does. SIGCHLD from an exiting hook
would pop the network loop, allowing the daemon to examine the result.
The daemon would take a hook request, fork/exec the hook, and return
its exit code. (Heck, if the stdout/err is "small enough", it could
even be captured and returned in the response.)

IIRC, Apache httpd even has a subsystem to monitor these kinds of
daemons and keep them running.

Cheers,
-g
Received on 2012-11-14 20:38:33 CET
