
Re: Sharded FSFS repositories - summary

From: Malcolm Rowe <malcolm-svn-dev_at_farside.org.uk>
Date: 2007-03-13 15:34:45 CET

On Tue, Mar 13, 2007 at 01:47:06PM +0100, Ph. Marek wrote:
> On Tuesday 13 March 2007 13:00, Malcolm Rowe wrote:
> > - We'll create shards of 4000 entries each. That's large enough that
> > someone would have to hit 16M revisions before a larger value would be
> > an improvement, but small enough that it has reasonable performance
> > (and support) on all filesystems that I'm aware of. It's also a
> > power-of-ten, so easier for humans to understand.
> 4000 is no (integer) power of ten, so would not really be better.

Okay, I didn't use the right word :-) It's still easier for humans to
grasp a range like 420000-423999 than one like 417792-421887.
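To make the comparison concrete, here's a small sketch of how a revision
number maps to a shard under the proposed scheme (the `revs/<shard>/<rev>`
path shape is my shorthand for the sharded layout, not exact FSFS paths):

```python
SHARD_SIZE = 4000  # the proposed files-per-shard value

def shard_path(rev):
    """Map a revision number to its shard directory (illustrative only)."""
    shard = rev // SHARD_SIZE
    return "revs/%d/%d" % (shard, rev)

print(shard_path(420000))  # revs/105/420000
print(shard_path(423999))  # revs/105/423999 -- last rev in the same shard
print(shard_path(424000))  # revs/106/424000 -- first rev of the next shard
```

With a round shard size the boundaries (420000, 424000, ...) are obvious at
a glance; with 4096 they'd fall at 417792, 421888, and so on.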

> I'd prefer to have a *real* (integer :-) power of ten, eg. 1000. And TBH, 4000
> is a bit too much (for me, at least) - 1000 would be high, but acceptable.
> (I'd really prefer 100 and 3 or 4 levels - but I seem to be alone with that.)

The change is for two main reasons:
 - making it work on filesystems that have a maximum number of entries
   per directory (e.g. some filesystems, some NAS boxes).
 - slightly improving performance on filesystems with sub-optimal handling
   of many files in one directory (typically not until you hit 100k revs
   or so).

There are other advantages too:
 - reducing spew when administrators list the contents of the revs/ directory.
 - allowing blocks of revisions to be moved off in one go.

But neither of those is the main reason to do this. (Realistically,
how many times a month will a typical admin do an 'ls' in revs/ ?)

4000 revs is a good compromise: it's big enough that it scales to large
repositories (ASF's repository would be halfway towards needing another
level if we went with 1000 files-per-shard), and it's small enough that
it works everywhere we need it to (even on Coda, it seems :-)).

It doesn't look like multi-level trees would be needed for performance
until you hit somewhere around 100M revisions, and I'm not aware of
anyone who's anywhere near that level yet :-)
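The scaling arithmetic behind those numbers, as I understand it: with a
single level of sharding, the revs/ directory holds one subdirectory per
shard, so it stays within the shard-size entry limit up to shard_size
squared revisions (this is just the back-of-the-envelope calculation, not
code from Subversion itself):

```python
def one_level_capacity(shard_size):
    # revs/ gains one subdirectory per shard_size revisions, so it
    # reaches shard_size entries at shard_size * shard_size revisions.
    return shard_size * shard_size

print(one_level_capacity(4000))  # 16000000 -- the 16M figure above
print(one_level_capacity(1000))  # 1000000  -- why 1000/shard runs out sooner
```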

> If this number would go into an existing file (format, fs-type), it would not
> require another read;

Sure, but it's the complexity that concerns me - we really need to
demonstrate a tangible benefit to make it that much more complex.

> and if we allowed not one, but two such numbers here,
> the repository could be re-arranged on-line.
> (The fs-layer had to be looking for both files until one was found - as was
> already recommended).

Why would you ever need to do something like that?

(What Karl recommended was online conversion from the 'flat' to
'sharded' scheme, which I still think is too complex for the slight
benefit [of having a slightly faster upgrade] it gives).

> Have you seen my mail regarding the transaction-directories? Maybe the naming
> there could be done with the same function.
>

They could, but how frequently do you commit transactions with 100,000
changed files? Maybe on an initial import, but in that case the time
spent writing the data is going to dwarf the time spent looking up the
entries, or at least that's my intuition. You're quite welcome to
benchmark the difference to see what it actually is.

Regards,
Malcolm

Received on Tue Mar 13 15:35:24 2007

This is an archived mail posted to the Subversion Dev mailing list.