
Re: fork/exec for hooks scripts with a large FSFS cache

From: Philip Martin <philip.martin_at_wandisco.com>
Date: Wed, 14 Nov 2012 13:49:25 +0000

Stefan Fuhrmann <stefan.fuhrmann_at_wandisco.com> writes:

> <philip.martin_at_wandisco.com>wrote:
>>
>> Perhaps we could start up a separate hook script process before
>> allocating the large FSFS cache and then delegate the fork/exec to that
>> smaller process?
>>
>
> I wonder whether there is a way to pass a different
> cache setting to the sub-process.

I don't think this would work. It's the fork itself that is failing: the
child process never comes into existence, so there is no process to give
a smaller cache setting or memory footprint.

Having hooks run in a separate process is complicated. The process
would need to be multi-threaded, or multi-process, to avoid running
hooks serially. stdin/stdout/stderr would also need to be handled
somehow. Pipes perhaps? Or by passing file descriptors across a Unix
domain socket?

For now I think we just have to recommend that the system has sufficient
swap for the fork to work. Once the child execs the hook, the memory
footprint of the process drops again. As far as I can tell on my Linux
system nothing actually gets written to swap; the space just has to
exist when fork is called.

-- 
Certified & Supported Apache Subversion Downloads:
http://www.wandisco.com/subversion/download
Received on 2012-11-14 14:50:15 CET

This is an archived mail posted to the Subversion Dev mailing list.