On 20.12.2012 02:08, Stefan Fuhrmann wrote:
> The ineffectiveness of our use of memcached in 1.6 had
> prompted the development of membuffer in the first place.
> Even with the relevant APR bug fixed (only recently),
> there are fundamental limitations compared to a SHM-based cache:
> * Instead of reading directly addressable memory, memcached
> requires inter-process calls over TCP/IP. That translates into
> ~10us latency. The performance numbers I found (~200k
> requests/s on larger SMP machines) are an order of magnitude
> lower than what membuffer achieves with a single thread.
What kind of latency do you expect when you share this cache amongst
several processes that have to use some other kind of RPC and/or locking
to access the shared-memory segment? I'm ignoring marshalling since it
costs the same in both cases.
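For the sake of argument, a single lookup in the SHM case reduces to
roughly the following. This is only a rough sketch, not membuffer's
actual code: the segment name, entry size, iteration count and the
single global APR process mutex are made-up placeholders.

[[[
/* Hypothetical micro-benchmark: cost of one cache lookup when several
 * processes share a membuffer-style segment and serialize access with
 * an APR process mutex.  Names and sizes are illustrative only. */
#include <stdio.h>
#include <string.h>
#include <apr_general.h>
#include <apr_pools.h>
#include <apr_shm.h>
#include <apr_proc_mutex.h>
#include <apr_time.h>

#define SEGMENT_SIZE   (64 * 1024 * 1024)   /* assumed cache size  */
#define ENTRY_SIZE     256                  /* assumed entry size  */
#define ITERATIONS     1000000

int main(void)
{
  apr_pool_t *pool;
  apr_shm_t *shm;
  apr_proc_mutex_t *mutex;
  char entry[ENTRY_SIZE];
  char *base;
  apr_time_t start;
  int i;

  apr_initialize();
  apr_pool_create(&pool, NULL);

  /* The first process creates the segment and the lock; a second
   * process would use apr_shm_attach() and apr_proc_mutex_child_init()
   * instead. */
  apr_shm_create(&shm, SEGMENT_SIZE, "/tmp/svn-cache-shm", pool);
  apr_proc_mutex_create(&mutex, "/tmp/svn-cache-lock",
                        APR_LOCK_DEFAULT, pool);
  base = apr_shm_baseaddr_get(shm);

  start = apr_time_now();
  for (i = 0; i < ITERATIONS; i++)
    {
      /* One "lookup": lock, copy the entry out, unlock.  No network
       * round-trip is involved, which is where the latency gap to a
       * memcached TCP/IP request comes from. */
      apr_proc_mutex_lock(mutex);
      memcpy(entry, base + (i % 1000) * ENTRY_SIZE, ENTRY_SIZE);
      apr_proc_mutex_unlock(mutex);
    }

  printf("avg ns per lookup: %" APR_TIME_T_FMT "\n",
         (apr_time_now() - start) * 1000 / ITERATIONS);

  apr_terminate();
  return 0;
}
]]]

On most mutex implementations the uncontended lock/unlock never leaves
user space, so the per-lookup cost is dominated by the memcpy; the
interesting question is what happens under contention from many
processes, which the sketch above does not model.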
> * More critically, memcached does not support partial data
> access, e.g. reading or adding a single directory entry. That's
> 1.6-esque O(n^2) instead of O(n) runtime for large folders.
That's an issue of cache layout. But I concede the point since it's a
time vs. space decision.
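To put rough numbers on that trade-off, a toy model along these lines
(made-up entry size, counting only the bytes a whole-value cache has to
move) illustrates what a per-entry key layout would buy over a single
directory blob:

[[[
/* Toy model of the two cache layouts; hypothetical numbers only,
 * it just shows the asymptotic behaviour described above. */
#include <stdio.h>

#define ENTRY_SIZE 64   /* assumed serialized size of one dirent */

int main(void)
{
  long n;

  printf("%10s %20s %20s\n",
         "entries", "one-key-per-dir", "one-key-per-entry");

  for (n = 1000; n <= 1000000; n *= 10)
    {
      /* Layout A: the whole directory lives under a single key.
       * Adding the i-th entry means GET of (i-1) entries plus SET of
       * i entries, so building the directory moves ~ENTRY_SIZE * n^2
       * bytes: O(n^2). */
      double whole_value = (double)ENTRY_SIZE * n * n;

      /* Layout B: each entry gets its own key ("dir/entry-i").
       * Adding an entry is one small SET: O(n) total, at the price of
       * per-key overhead and one lookup per entry on read. */
      double per_entry = (double)ENTRY_SIZE * n;

      printf("%10ld %20.3g %20.3g\n", n, whole_value, per_entry);
    }

  return 0;
}
]]]

Getting the O(n) back that way costs one key per entry, and over TCP
one round trip per entry when the whole directory is read back, which
is the time vs. space side of the decision.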
Director of Subversion | WANdisco | www.wandisco.com