On 11.04.2011 12:15, Philip Martin wrote:
> Mark Phippard <markphip_at_gmail.com> writes:
>
>> Was reading the release notes on the new in-memory cache. How does
>> this work (provide benefits) with mod_dav_svn? It seems like the
>> cache is per-process. Aren't the processes (pre-fork MPM) with DAV
>> generally short-lived? If I check out trunk and someone else
>> immediately checks out trunk after me, do they get some kind of
>> caching benefit from my checkout?
> Apache with the pre-fork MPM is a set of single-threaded processes;
> with the worker MPM it is a set of multi-threaded processes. The
> lifetime of each process is configurable: it's possible to configure
> a new process for every connection (or every HTTP request if
> keepalive is not enabled), but performance suffers, so I think it is
> normal for each process to handle hundreds or thousands of requests.
>
> The cache code is new, but in-memory caching is not; to a certain
> extent the current code replaced the previous in-memory cache.
>
As explained by Philip, short-lived server processes will limit
membuffer cache effectiveness. The highest gains will be seen
on single-process, multi-threaded servers, e.g. the default MPM
on Windows.
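
For illustration, a prefork setup along the following lines keeps
each child process alive across many requests, so its in-process
cache gets a chance to warm up and be reused. The directive names
are the stock Apache 2.2 prefork ones; the values are made up and
not a recommendation:

  <IfModule mpm_prefork_module>
      StartServers            5
      MinSpareServers         5
      MaxSpareServers        10
      MaxClients            150
      # A high value (or 0 = unlimited) lets each child, and
      # therefore its per-process cache, survive many requests.
      MaxRequestsPerChild 10000
  </IfModule>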
However, there have been further improvements to the caching
code since 1.6:
* cache virtually all data, so that non-trivial server requests like
  reports already reach significant hit rates even on their first run
  (certain information is used over and over again).
* support access to sub-items, e.g. individual entries in a cached
  directory structure; this reduces directory iteration from O(n^2)
  to O(n) (see the first sketch after this list).
* change the API from the "duplication" model of the in-process cache
  to a very efficient (de-)serialization approach (see the second
  sketch after this list). Writing to the cache is about as fast as
  before, but reading is up to 5 times as fast.
* memcached, if configured at all, will now be used for full-text
  caching only, because the communication overhead would more than
  offset the performance gains for the other structures.
* cache-membuffer has a strict memory usage limit, as opposed to
  cache-inprocess (see the last sketch after this list).
* the new serializer should slightly reduce memory consumption
  compared to simply duplicating data structures, i.e. more data
  fits into the cache.
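
First sketch: sub-item access. The types and function names below are
made up for illustration and are not the real svn_cache API. If every
lookup materializes the whole directory, walking all n entries costs
n * O(n) = O(n^2); a partial getter keeps the walk at O(n).

#include <stdlib.h>
#include <string.h>

typedef struct entry_t { char name[64]; long size; } entry_t;

typedef struct cached_dir_t
{
  size_t count;
  const entry_t *entries;     /* packed, serialized representation */
} cached_dir_t;

/* Whole-structure access: copies all entries on every call, O(n). */
static entry_t *dir_get_copy(const cached_dir_t *dir)
{
  entry_t *copy = malloc(dir->count * sizeof(*copy));
  if (copy)
    memcpy(copy, dir->entries, dir->count * sizeof(*copy));
  return copy;
}

/* Sub-item access: extracts just the i-th entry, O(1). */
static entry_t dir_get_entry(const cached_dir_t *dir, size_t i)
{
  return dir->entries[i];
}

/* O(n^2): every iteration step re-fetches the complete directory. */
static long total_size_slow(const cached_dir_t *dir)
{
  long sum = 0;
  size_t i;
  for (i = 0; i < dir->count; i++)
    {
      entry_t *all = dir_get_copy(dir);
      if (! all)
        return -1;
      sum += all[i].size;
      free(all);
    }
  return sum;
}

/* O(n): every step fetches only the single entry it needs. */
static long total_size_fast(const cached_dir_t *dir)
{
  long sum = 0;
  size_t i;
  for (i = 0; i < dir->count; i++)
    sum += dir_get_entry(dir, i).size;
  return sum;
}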
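
Second sketch: duplication vs. (de-)serialization, again with
hypothetical types rather than the real Subversion interfaces. A
duplication-based cache deep-copies the structure on every access,
while a serialization-based cache stores one flat buffer; its read
path avoids per-field allocations, and the compact layout means more
data fits into the same amount of cache memory.

#include <stdlib.h>
#include <string.h>

/* An object with an out-of-line string, like many cached FS structs. */
typedef struct node_t
{
  long revision;
  const char *path;
} node_t;

/* Duplication-style read: a deep copy with one allocation per field. */
static node_t *node_dup(const node_t *src)
{
  size_t path_len = strlen(src->path) + 1;
  node_t *copy = malloc(sizeof(*copy));
  char *path = malloc(path_len);
  if (! copy || ! path)
    {
      free(copy);
      free(path);
      return NULL;
    }
  copy->revision = src->revision;
  memcpy(path, src->path, path_len);
  copy->path = path;
  return copy;
}

/* Serialization-style write: pack everything into one flat buffer,
   laid out as [node_t][path bytes ...]. */
static char *node_serialize(const node_t *src, size_t *len)
{
  size_t path_len = strlen(src->path) + 1;
  char *buf = malloc(sizeof(node_t) + path_len);
  if (! buf)
    return NULL;
  memcpy(buf, src, sizeof(node_t));
  memcpy(buf + sizeof(node_t), src->path, path_len);
  *len = sizeof(node_t) + path_len;
  return buf;
}

/* Serialization-style read: little more than a pointer fixup into the
   buffer, with no per-field allocation. */
static node_t *node_deserialize(char *buf)
{
  node_t *node = (node_t *)buf;
  node->path = buf + sizeof(node_t);
  return node;
}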
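
Last sketch: what a strict memory limit means in practice. The names
are made up and the real membuffer code is far more elaborate; the
point is only that the cache evicts old data instead of growing past
a fixed byte budget.

#include <stddef.h>

/* A cache that owns a fixed byte budget set once at startup. */
typedef struct cache_t
{
  size_t capacity;      /* hard limit, never exceeded */
  size_t used;          /* bytes currently occupied */
} cache_t;

/* Stand-in for real eviction: pretend each call frees up to 4kB. */
static size_t evict_some(cache_t *cache)
{
  size_t freed = cache->used < 4096 ? cache->used : 4096;
  cache->used -= freed;
  return freed;
}

/* Admit a new item only after making room within the fixed budget. */
static int cache_reserve(cache_t *cache, size_t item_size)
{
  if (item_size > cache->capacity)
    return 0;                        /* will never fit: reject it */
  while (cache->capacity - cache->used < item_size)
    evict_some(cache);
  cache->used += item_size;
  return 1;
}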
-- Stefan^2.
Received on 2011-04-11 23:09:29 CEST