
Fulltext caching and server (non-)scalability

From: Stefan Fuhrmann <stefan.fuhrmann_at_wandisco.com>
Date: Mon, 14 Apr 2014 16:05:01 +0200

Hi all,

This is nothing new (introduced way back when in 1.6),
but I have only become aware of it during my recent
scalability testing. I think I can fix that but that won't
happen until the end of next week or so.

Scenario: Large files have been cached and a number
of clients now request them. The critical bit here is
N clients each requesting, say, a 1GB file. ra_serf may
even request two of these at once, whether the same
file or different ones.

What happens: N x 1GB is fetched from cache, each
copy held in its own buffer and streamed out from there.

Problem: How many of those GB-sized buffers can
your server hold before going OOM?

Solution: Fetch only chunks of a configurable size
(default 16MB) from caches that support it, and limit
other caches (memcached) to at most one chunk per file.
Fall back to standard reconstruction from deltas
when data gets evicted from the cache while some chunks
have not been delivered yet. All of this can be hidden
behind our existing stream interface.

-- Stefan^2.
Received on 2014-04-14 16:05:36 CEST

This is an archived mail posted to the Subversion Dev mailing list.
