On Mon, Nov 05, 2012 at 02:54:07PM +0100, Stefan Fuhrmann wrote:
> On Sun, Nov 4, 2012 at 10:40 AM, Stefan Sperling <stsp_at_elego.de> wrote:
> > I just came across something that reminded me of this thread.
> > It seems PostgreSQL is doing something quite similar to what we
> > want to do here:
> >
> > When the first PostgreSQL process attaches to the shared memory
> > segment, it checks how many processes are attached. If the result is
> > anything other than "one", it knows that there's another copy of
> > PostgreSQL running which is pointed at the same data directory, and
> > it bails out.
> > http://rhaas.blogspot.nl/2012/06/absurd-shared-memory-limits.html
> >
>
> IIUIC, the problems they are trying to solve are:
>
> * have only one process open / manage a given data base
> * have SHM of arbitrary size
>
> Currently, we use named SHM to make the values of
> two 64-bit numbers per repo visible to all processes.
> Also, we don't have a master process that would
> channel access to a given repository.
>
> The "corruption" issue is only about how to behave
> if someone wrote random data to one of our repo
> files. That's being addressed now (don't crash, have
> predictable behavior in most cases).
>
> > If this works for postgres I wonder why it wouldn't work for us.
> > Is this something we cannot do because APR doesn't provide the
> > necessary abstractions?
> >
>
> The postgres code / approach may be helpful when
> we try to move the whole membuffer cache into a
> SHM segment.
Ah, I see.
Next question: Why don't we use a single SHM segment for the revprop cache?
Revprop values are usually small so mapping a small amount of memory
would suffice. And using a single SHM segment would make updated values
immediately visible in all processes, wouldn't it? Then we wouldn't need
the generation-number dance to make sure all processes see up-to-date values.
Whichever process updates a revprop value would update the corresponding
section of shared memory.
Received on 2012-11-05 15:28:53 CET