
Optimizing back-end: saving cycles or saving I/O (was: Re: [PATCH v2] Saving a few cycles, part 3/3)

From: Johan Corveleyn <jcorvel_at_gmail.com>
Date: Wed, 12 May 2010 01:16:29 +0200

On Tue, May 11, 2010 at 1:56 PM, Stefan Sperling <stsp_at_elego.de> wrote:
> On Tue, May 11, 2010 at 07:43:33AM -0400, Mark Phippard wrote:
>> On Tue, May 11, 2010 at 7:27 AM, Stefan Sperling <stsp_at_elego.de> wrote:
>> > On Tue, May 11, 2010 at 01:36:26AM +0200, Johan Corveleyn wrote:
>> >> As I understand your set of patches, you're mainly focusing on saving
>> >> cpu cycles, and not on avoiding I/O where possible (unless I'm missing
>> >> something). Maybe some of the low- or high-level algorithms in the
>> >> back-end can be reworked a bit to reduce the amount of I/O? Or maybe
>> >> some clever caching can avoid some file accesses?
>> >
>> > In general, I think trying to work around I/O slowness by loading
>> > stuff into RAM (caching) is a bad idea. You're just taking away memory
>> > from the OS buffer cache if you do this. A good buffer cache in the OS
>> > should make open/close/seek fast. (So don't run a windows server if
>> > you can avoid it.)
>> >
>> > The only point where it's worth thinking about optimizing I/O
>> > access is when you get to clustered, distributed storage, because
>> > at that point every I/O request translates into a network packet.
>>
>> You had me until that last part.  I think we should ALWAYS be thinking
>> about optimizing I/O.  I have little doubt that is where the biggest
>> performance bottlenecks live (other than network of course).  I agree
>> that making a big cache is probably not the best way to go, but I
>> think we should always be looking for optimizations where we avoid
>> repeated open/closes that are not necessary.
>
> That's true. Avoiding repeated open/close of the same file
> is a good optimisation. Even with a good buffer cache it will
> make a difference.
>
> So s/The only point/One point/ :)

Yes, some form of caching may or may not be a good approach, but the
main point is that, ideally, every interesting rev file should be
opened and read exactly once for a given client request. Currently
that is definitely not the case: for "svn log" it's closer to 10
opens/closes per rev file, reading about 5 times the number of bytes
in it, and with packed revs it's even worse because of the extra
lookup of the rev offset in the pack manifest file.
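
Just to make that extra lookup concrete, here's a rough sketch of what
resolving a rev's offset through a pack manifest amounts to. This is
purely hypothetical (not the actual FSFS code), and it assumes a
manifest that simply lists one decimal offset per line, starting at
the first rev of the pack:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical illustration only: return the offset of REV inside its
 * pack file by scanning the manifest, which is assumed to hold one
 * decimal offset per line, starting at FIRST_REV.  Every call costs
 * an extra open/read/close on top of the pack file itself. */
static long
offset_for_rev(const char *manifest_path, long rev, long first_rev)
{
  FILE *mf = fopen(manifest_path, "r");
  char line[64] = "";
  long i;

  if (mf == NULL || rev < first_rev)
    {
      if (mf)
        fclose(mf);
      return -1;
    }

  /* Skip the lines for the revs before REV, then read REV's own line. */
  for (i = 0; i <= rev - first_rev; i++)
    if (fgets(line, sizeof(line), mf) == NULL)
      {
        fclose(mf);
        return -1;
      }

  fclose(mf);
  return atol(line);
}

The details in the real back-end obviously differ, but the point
stands: every access to a packed rev has this second file to open and
scan before the actual data can be read.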

Maybe that ideal is currently impossible to reach because of the
higher-level algorithms (the way the data has to be retrieved). So by
"some clever caching" I really meant: read the rev file (or the
interesting parts of it) exactly once, keep it in memory for those 10
other accesses that follow very shortly after, then forget about it.
Not a general LRU cache or anything like that. But I really don't know
whether this is a good idea (it may be difficult to determine when you
no longer need it, ...). Just guessing ...
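
To give an idea of what I mean, here's a minimal sketch of such a
request-scoped, single-slot cache. All names are hypothetical (this is
not the real FSFS API); it's only meant to illustrate the idea:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch of a request-scoped, single-slot cache: the rev
 * file is read from disk the first time it is needed, the next few
 * accesses for the same rev are served from memory, and the contents
 * are dropped as soon as the request is done with that rev. */
typedef struct rev_cache_t
{
  long rev;       /* revision currently held, or -1 if empty */
  char *data;     /* full contents of that rev file */
  size_t len;
} rev_cache_t;

/* Return the contents of PATH for revision REV, hitting the disk only
 * if the cache does not already hold that revision. */
static const char *
rev_cache_read(rev_cache_t *cache, long rev, const char *path,
               size_t *len)
{
  if (cache->rev != rev)
    {
      FILE *fp = fopen(path, "rb");
      long size;

      if (fp == NULL)
        return NULL;

      fseek(fp, 0, SEEK_END);
      size = ftell(fp);
      fseek(fp, 0, SEEK_SET);

      free(cache->data);
      cache->data = malloc(size);
      if (cache->data == NULL
          || fread(cache->data, 1, size, fp) != (size_t)size)
        {
          fclose(fp);
          free(cache->data);
          cache->data = NULL;
          cache->rev = -1;
          return NULL;
        }
      fclose(fp);
      cache->rev = rev;
      cache->len = (size_t)size;
    }

  *len = cache->len;
  return cache->data;    /* subsequent accesses cost no I/O at all */
}

/* Forget the cached rev once the request no longer needs it. */
static void
rev_cache_forget(rev_cache_t *cache)
{
  free(cache->data);
  cache->data = NULL;
  cache->rev = -1;
}

The hard part, as said above, is the "forget" step: the higher-level
code would somehow have to tell the back-end when it is done with a
given rev.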

In my book, I/O is almost always one of the slowest parts, even if the
data is on a local 15k rpm disk or even an SSD. Since SVN with FSFS
does so much I/O, touching potentially thousands of little files for a
single client request, I think it could pay off big time to reduce
that as much as possible.

>> I think it is extremely common that our customers have their
>> repositories on NFS-mounted or SAN storage.  While these often have
>> fast disk subsystems there is still a noticeable penalty for file
>> opens.  Have you looked at Blair's wiki before?
>>
>> http://www.orcaware.com/svn/wiki/Server_performance_tuning_for_Linux_and_Unix

Thanks, very interesting read indeed. I'll try some of those suggestions.

And yes, we're in that exact situation: FSFS backend on an NFS-mounted
SAN (for some good reasons). All clients on the same LAN, so network
is not a bottleneck.

A couple of weeks ago I did some performance testing, comparing our
NFS-mounted back-end with the same repository on a local SSD disk:
- log: ~9 times faster
- blame: ~5 times faster (if the client is fast enough)
- checkout: ~2 times faster (if the client's I/O is fast enough (wc-1))
- update: ~1.5 times faster (ditto)
(haven't tested merge)

Ok, this is an extreme comparison (NFS vs local SSD), but it
illustrates the dependency on I/O.

I think the differences for checkout and update (and probably merge)
will become larger and more apparent with wc-ng (fewer bottlenecks on
the client).

Cheers,

-- 
Johan
