
Re: svn commit: r1586391 - in /subversion/branches/thunder: BRANCH-README notes/thundering-herd.txt

From: Branko Čibej <brane_at_wandisco.com>
Date: Fri, 11 Apr 2014 06:21:27 -0600

On 11.04.2014 02:24, Daniel Shahaf wrote:
> stefan2_at_apache.org wrote on Thu, Apr 10, 2014 at 18:08:46 -0000:
>> +++ subversion/branches/thunder/notes/thundering-herd.txt Thu Apr 10 18:08:45 2014
>> @@ -0,0 +1,50 @@
>> +The Problem
>> +-----------
>> +
>> +In certain situations, such as the announcement of a milestone being
>> +reached, the server gets hit by a large number of client requests for
>> +the same data. Since updates or checkouts may take minutes, more of
>> +the same requests come in before the first ones complete. The server
>> +then gets progressively slowed down.
> Does this have to be solved in svn?
>
> The solution sounds like you are reinventing the OS' disk cache.
> Wouldn't it be better to make a syscall to hint the cache that "<these>
> files are going to be a hot path for the next few minutes", so the OS
> can then consider the size of the files v. the size of the CPU caches
> and other processes' needs and make its own decisions?
>
> I'm not against having this branch, I just don't immediately see why
> this is the best solution to the described problem.

I'd normally agree ... but I'm pretty sure that in the described
scenario, the disk cache is already as hot as it'll ever get. The fact
is that the FS layer caches data that are derived from disk contents in
a rather expensive way, and no amount of disk caching can amortize that
expense.
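
(For illustration: the OS-level hint Daniel describes would be roughly
the sketch below, a posix_fadvise(POSIX_FADV_WILLNEED) call on a rev
file. The helper name and the per-file approach are made up here. It
only pre-warms the page cache with raw file bytes, which is exactly the
part that is already warm, so the derived-data cost stays untouched.)

    /* Sketch only: ask the kernel to pull a rev file into the page
     * cache ahead of time.  Error handling omitted. */
    #define _POSIX_C_SOURCE 200112L  /* for posix_fadvise */
    #include <fcntl.h>
    #include <unistd.h>

    static void
    prefetch_rev_file(const char *path)
    {
      int fd = open(path, O_RDONLY);
      if (fd != -1)
        {
          /* len == 0 means "through the end of the file". */
          posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
          close(fd);
        }
      /* Whether the data then stays resident is entirely the kernel's
       * decision; this is a hint, not a guarantee. */
    }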

However: I would argue that, long term, effort is better invested into
changing the way we store data on disk to lower or, dare I say,
eliminate the expense of deriving it. Currently the cached data falls
into two distinct groups:

  * metadata (directories and nodes)
  * content (fulltexts and, I think, deltas)

Retrieving the fulltext of a file is in general going to be expensive,
because of deltification; but the thundering-herd effect isn't all that
important for file contents, compared to metadata, since the latter gets
the lion's share of accesses.
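
(Schematically, and with invented names rather than the real FS code:
getting a fulltext means recursing down the delta chain to a
self-contained representation and applying each delta on the way back
up, so the cost scales with the length of the chain.)

    /* Sketch only; rep_t and the helpers are hypothetical, not the
     * actual FSFS/FSX API. */
    typedef struct rep_t
    {
      struct rep_t *base;      /* NULL if stored as plain fulltext */
    } rep_t;

    static char *read_plain_rep(rep_t *rep);            /* hypothetical */
    static char *read_delta(rep_t *rep);                 /* hypothetical */
    static char *apply_delta(char *base, char *delta);   /* hypothetical */

    static char *
    reconstruct_fulltext(rep_t *rep)
    {
      if (rep->base == NULL)
        return read_plain_rep(rep);      /* one read, no delta work */

      /* Rebuild the base first, then apply this rep's delta to it;
       * every level of the chain adds another read plus an apply. */
      return apply_delta(reconstruct_fulltext(rep->base),
                         read_delta(rep));
    }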

Currently, the cost of retrieving metadata is directly dependent on the
cost of retrieving content, since we naïvely store directories the same
way as file contents. I'd argue that effort is better spent on
implementing the new metadata index design which makes much better use
of disk (or rather, database page) caching.

FWIW, I agree with Ivan: the proposed block/retry approach leaves me
less than enthusiastic ... I don't think it's even provable that it
cannot cause deadlocks, because the blocking order depends on access
patterns driven by clients that we have no control over. OK, nothing
will actually deadlock, since the locks are designed to expire; but I
can easily imagine scenarios where the blocking slows things down
overall.

Limiting the number of allowed parallel requests at the HTTPd and
svnserve level is a much better approach, IMO, and can be tweaked by
the admin depending on server capabilities. I don't think anyone has
ever researched how many parallel requests the repository can actually
handle in these situations. I suspect looking into that would be time
better spent.
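
(For HTTPd that is just the stock worker cap; the figure below is a
placeholder, not a recommendation, and svnserve's thread limits could
presumably be tuned to match:)

    # Apache httpd 2.4: cap the number of requests served concurrently
    # (the 2.2 equivalent is MaxClients).  With mod_dav_svn this also
    # caps how many expensive checkout/update responses can be in
    # flight at once.
    MaxRequestWorkers 64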

-- Brane

-- 
Branko Čibej | Director of Subversion
WANdisco // Non-Stop Data
e. brane_at_wandisco.com
