On Thu, Dec 20, 2012 at 5:08 AM, Branko Čibej <brane_at_wandisco.com> wrote:
> On 20.12.2012 02:08, Stefan Fuhrmann wrote:
> > The ineffectiveness of our use of memcached in 1.6 had
> > prompted the development of membuffer in the first place.
> > Even with the relevant APR bug fixed (which happened only
> > recently), there remain fundamental limitations compared to a
> > SHM-based implementation:
> > * Instead of reading directly addressable memory, memcached
> > requires inter-process calls over TCP/IP. That translates into
> > ~10us latency. The performance numbers I found (~200k
> > requests/s on larger SMP machines) are an order of magnitude
> > less than what membuffer achieves with a single thread.
> What kind of latency do you expect when you share this cache amongst
> several processes that have to use some other kind of RPC and/or locking
> to access the shared-memory segment? I'm ignoring marshalling since it
> costs the same in both cases.
A read lock is expected to take ~10ns in the typical case;
a write lock, 100..200ns (when no waiting is required).
IOW, the latency is more or less the same as we have today
with our in-process caches.
No RPC is required once the connection to the shared-memory
cache has been set up. The cache server only:
* creates & initializes the cache memory
* provides a registry for cache clients
* periodically checks for dead clients (to release zombie locks)
> > * More critically, memcached does not support partial data
> > access, e.g. reading or adding a single directory entry. That's
> > 1.6-esque O(n^2) instead of O(n) runtime for large folders.
> That's an issue of cache layout. But I concede the point since it's a
> time vs. space decision.
Received on 2012-12-20 22:12:50 CET