On Sun, 2005-11-06 at 17:43 -0800, Greg Stein wrote:
> On Mon, Oct 31, 2005 at 12:45:48PM -0700, Garrett Rooney wrote:
> > On 10/29/05, Daniel Berlin <dberlin@dberlin.org> wrote:
> >
> > > 1. Anybody who doesn't have enough memory to hold 128 dirs of their repo
> > > in memory is probably in trouble anyway. Assuming 100k of info per dir,
> > > that's only ..... 12.8 meg of memory, *if they hit all the dirs*, and
> > > they had probably about 1000 files per dir (to generate 100k of info).
> > > This seems reasonable to me.
> >
> > This seems like a reasonable tradeoff for the speed gain, although
>
> Is it reasonable? Adding 12.8 meg in absolute terms isn't a big deal.
> Now do it for 100 concurrent sessions.
>
If you look at the numbers above, you'll note this would require *100k* of
dirents per directory, the cache hash being perfect, the repository having
at least 128 dirs of 1000 files each, and every concurrent session hitting
every single one of those directories at the same time.
That's a lot of ifs.
> Oops.
Not so much.
The min footprint of svnserve on gcc.gnu.org was around 20 meg virt per
session, and 12 meg real, before this change. So in the completely
ridiculous scenario above, your footprint is now 30 meg virt and 24 meg real.
That means that before you needed 1.2 gig of memory for 100 sessions, and
now you need 2.4, again, in this ridiculous scenario.
IOW, it's not like I've increased the footprint from 3 meg to 2400 meg.
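Spelling that worst case out (same numbers as above, rounded):

  cache per session:  128 dirs * ~100k of dirents         = ~12.8 meg
  before:             100 sessions * 12 meg real           =  1.2 gig
  after:              100 sessions * ~24 meg real (12+12.8) = ~2.4 gig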
In real usage, all 20 concurrent sessions of svnserve going on
gcc.gnu.org right now are at .... 12.5 meg real.
Thus, we could probably push the number down to 32 and still get the
same speedup, at least for gcc. In the completely ridiculous scenario
above, that would produce an extra 3 meg footprint per client. Would
that satisfy you?
>
> Individual connection footprints are just one piece of the puzzle.
> They *still* have to stay tiny so that a given server can service more
> than three clients.
Of course. If the cost of serializing and unserializing dirents weren't so
high (IE if there were some magic self-contained format for them), I would
have just set up memcached, memcached this (because all these svnserves can
share the dirents), and been done with it. However, I was doing this
because we were spending more time processing the same directories again
and again than doing almost anything else.
I believe there are bigger scalability issues to worry about when serving
100 concurrent sessions than the size of the dirent cache.
Honestly, if you want to play the "oh, we need to serve 100 concurrent
sessions at once" game, here's some real data:
oprofiling the server with 20 concurrent sessions going shows that
most of the server time is being spent on:
1. md5'ing every single darn thing read from fsfs, every single time
it's read. I've since hardcoded this off in our fsfs, and instead have
svnadmin verify do it. This was eating about 40% of the CPU time spent
per session on the server.
2. svn_stream_readline to read the hash tables for directories and
revprops, again and again and again and again (still, because every
svnserve instance does it). On gcc.gnu.org this is actually cpu bound
(the data is in the OS cache), but it still wastes more than 20% of the
cpu time per session, which is quite high. When it *does* hit the disk,
it's ridiculously slow, because it uses a 1 byte buffer (see the sketch
below).
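To make the 1 byte buffer problem concrete, here is a rough sketch of the
difference, using plain POSIX read() and made-up function names, not
svn_stream_readline itself:

#include <string.h>
#include <unistd.h>

/* One read() call per character until we find '\n'.  This is roughly
   what a readline over a stream with no lookahead buffer degenerates
   to. */
static ssize_t
readline_bytewise(int fd, char *buf, size_t bufsize)
{
  size_t i = 0;
  char c;

  while (i + 1 < bufsize && read(fd, &c, 1) == 1 && c != '\n')
    buf[i++] = c;
  buf[i] = '\0';
  return (ssize_t) i;
}

/* One read() per block, then memchr() for the newline.  (A real
   version would have to carry leftover bytes over to the next call;
   that bookkeeping is omitted here.) */
static ssize_t
readline_blockwise(int fd, char *buf, size_t bufsize)
{
  ssize_t n = read(fd, buf, bufsize - 1);
  char *nl;

  if (n <= 0)
    return n;
  buf[n] = '\0';
  nl = memchr(buf, '\n', (size_t) n);
  if (nl)
    *nl = '\0';
  return nl ? (ssize_t) (nl - buf) : n;
}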
If you want to get the memory footprint down and get better performance,
make it so we don't *have* to cache this much data to get any kind of
performance in the first place. Personally, I think it's somewhat funny
that we spend most of the time processing revisions *hunting for newline
characters* in serialized hash tables.
This change was a last resort, trust me.
I spent a lot of time thinking about whether I could avoid these dir
reads in the first place, and hunting down callpaths to see if they
really needed to be doing what they were doing.
No dice.
For 1.4, I think we need to do something about the serialized
hashtables. Trivially, you could prepend an encoded integer to each line
instead of using something that goes searching for newlines. You could
make up for the small increase in space usage by changing the current
serialization of lengths into variable-size encoded ints instead of
written strings. Or you could compress the hashtables, which would also
help i/o for large dirs at a very small cpu cost (zlib is quite
efficient on blocks of this size).
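For instance, something along these lines, a sketch only, with made-up
names, not a proposed on-disk format:

#include <stdio.h>
#include <string.h>

/* Write LEN as a base-128 varint: 7 bits per byte, high bit set on
   every byte except the last. */
static void
write_varint(FILE *out, size_t len)
{
  while (len >= 0x80)
    {
      putc((int) ((len & 0x7f) | 0x80), out);
      len >>= 7;
    }
  putc((int) len, out);
}

/* One hash entry: varint(klen) key varint(vlen) value.  A reader can
   then fread() exactly klen/vlen bytes instead of scanning for
   newlines one byte at a time. */
static void
write_entry(FILE *out, const char *key, const char *val, size_t vallen)
{
  size_t keylen = strlen(key);

  write_varint(out, keylen);
  fwrite(key, 1, keylen, out);
  write_varint(out, vallen);
  fwrite(val, 1, vallen, out);
}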
Any of this will completely destroy any compatibility with the current
fsfs format, however.
Sadly, I'm not sure there *is* any way to keep compatibility while still
giving the reader the information it needs to avoid svn_stream_readline.
(This is another reason I'm in favor of featurizing the fs: we'd just
set a "hash3" feature so that we knew how the hashtables should be stored.)
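A rough sketch of what I mean, with entirely hypothetical names (nothing
like this exists in the fs code today): the format file lists features,
and the reader picks the hash codec from that.

#include <string.h>

enum hash_codec { HASH_NEWLINE_DELIMITED, HASH_LENGTH_PREFIXED };

/* FEATURES is the whitespace-separated feature list read from the
   format file, e.g. "hash3".  This is a naive strstr() check; a real
   parser would tokenize.  Unknown features would make an older svn
   refuse to open the fs instead of silently misreading it. */
static enum hash_codec
choose_hash_codec(const char *features)
{
  if (strstr(features, "hash3"))
    return HASH_LENGTH_PREFIXED;
  return HASH_NEWLINE_DELIMITED;
}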
> I've seen CVS take several hundred meg. Hey, no problem. The server
> had 4 gig of memory. Hah! It wasn't so fun as the server went into
> swap death when a half-dozen people tried to update their client.
Without my change, it would have gone into i/o- and cpu-bound death
anyway long before that. Just FYI :)
>
> (and yes, I see this has been committed, but I still think there is
> concern around adding to the per-connection footprint like this)
>
> Cheers,
> -g
>