
Re: svn commit: r1495419 - in /subversion/trunk/subversion/libsvn_ra_serf: options.c ra_serf.h serf.c util.c

From: Ben Reser <ben_at_reser.org>
Date: Thu, 11 Jul 2013 10:42:41 -0700

On Wed, Jul 10, 2013 at 9:57 PM, Greg Stein <gstein_at_gmail.com> wrote:
> On Wed, Jul 10, 2013 at 6:57 PM, Ben Reser <ben_at_reser.org> wrote:
>> I have about 160ms of latency to this server. Mostly because I'm on
>> the west coast of the US and it's in the UK someplace based on its
>> domain name.
>
> Human perception is right around the 250ms mark. User interface
> designers use that metric for response time.

[...]

> In any case. In your example, with a 160ms latency, then RTT*2 is past
> that barrier. The user will notice a lag.
>
> That said: there may already be other lag, so RTT*2 might be 320ms of
> a 5s response time.

I'd say that there absolutely is other lag in all of these cases. If
we were always responding under the 250ms mark that you point out
above (and that I agree with), then I'd be a lot more concerned about
this change. However, every one of these cases results in a lot more
delay than that. I guess I could have measured that, i.e. how long the
delay was before the first result arrived. But I'm pretty confident in
saying this change doesn't push us past a 250ms delay we're not
already past.

Consider the log --diff example I gave. It used 91 requests, 11
of which were extra OPTIONS requests to detect chunking, in order to
provide the log and diff of 5 revisions. It used 11 sessions and 22
connections to do so. Even without this patch, log --diff noticeably
pauses on each log entry to retrieve the diff, well past a 250ms delay,
when I'm using it against svn.us.apache.org, to which I have around
20ms latency.
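A hypothetical back-of-envelope (assuming each extra OPTIONS request
costs roughly one round trip, which is my assumption, not a measurement)
shows why the probe overhead disappears into the existing lag:

```python
# Rough estimate of the delay added by the chunking-detection probes.
# The one-RTT-per-request assumption is illustrative, not measured.
extra_options = 11    # extra OPTIONS requests observed in the trace above
rtt_ms = 160          # latency to the UK server in the original example

total_added_ms = extra_options * rtt_ms
per_session_ms = total_added_ms / 11   # the trace used 11 sessions

print(total_added_ms, per_session_ms)  # 1760 160.0
```

That is under two seconds total, spread one RTT at a time across session
setups that already take noticeably longer than 250ms each.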

On another front I ran our test suite and timed it in several
configurations (this is with the tristate branch and its option
meanings):
auto(through nginx 1.2.9)
real 30m41.060s
user 15m14.803s
sys 12m34.307s

no(through nginx 1.2.9)
real 30m50.127s
user 15m16.974s
sys 12m37.019s

yes(direct)
real 30m23.280s
user 15m13.347s
sys 12m27.483s

auto(direct)
real 30m35.522s
user 15m15.198s
sys 12m29.660s

no(direct)
real 30m22.735s
user 15m14.443s
sys 12m30.666s

Obviously latency is almost a non-issue here because everything is
local (<1ms), so what I'm really looking to show is the relative
performance of CL vs chunked requests.
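The spread in those wall-clock numbers can be checked with a little
arithmetic (real times only; the fastest-vs-slowest pairing is mine):

```python
# Convert the "real" times above to seconds and compare the extremes.
def secs(minutes, seconds):
    return minutes * 60 + seconds

runs = {
    "auto(nginx)":  secs(30, 41.060),
    "no(nginx)":    secs(30, 50.127),
    "yes(direct)":  secs(30, 23.280),
    "auto(direct)": secs(30, 35.522),
    "no(direct)":   secs(30, 22.735),
}

fastest = min(runs.values())   # no(direct)
slowest = max(runs.values())   # no(through nginx 1.2.9)
spread = slowest - fastest

print(round(spread, 3), round(100 * spread / fastest, 2))  # 27.392 1.5
```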

I didn't repeat these tests, so you're seeing a single result; there
may be some variation between runs that isn't averaged out. As you can
see, though, there is very little difference in these tests as far as
time goes. The biggest spread is about 27 seconds, between no(direct)
and no(through nginx 1.2.9), which is roughly 1.5% performance
degradation, and at least some of that comes from nginx being in line.

Now part of the concern over CL is that we'd end up having to use temp
files. My test setup was using a ramdisk for the test workspace and the
system disk is an SSD, so that might help hide some of that cost. Even
more importantly, I believe the request body needs to be over 100k
before it'd need a tempfile, so I kinda doubt our test suite exercises
that much.
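To make the temp-file concern concrete, here's a minimal sketch (not
serf's actual code; the 100k threshold is just the figure guessed at
above) of why Content-Length forces buffering while chunked could
stream each piece to the socket as it's produced:

```python
import tempfile

SPOOL_LIMIT = 100 * 1024  # assumed threshold before spilling to disk

def body_for_content_length(chunks):
    """Buffer a request body so its total size is known up front.

    With Transfer-Encoding: chunked, each piece could be written to the
    socket as produced; with Content-Length, we must measure the whole
    body first, buffering in memory and spilling to a temp file only
    once the body grows past SPOOL_LIMIT."""
    buf = tempfile.SpooledTemporaryFile(max_size=SPOOL_LIMIT)
    total = 0
    for piece in chunks:
        buf.write(piece)
        total += len(piece)
    buf.seek(0)
    return total, buf

# A small body stays in memory and never touches disk.
length, body = body_for_content_length([b"x" * 1024] * 4)
print(length)  # 4096
```

Since most requests in the test suite are well under the limit, the
spooled buffer would stay in memory, which fits the numbers above.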

But so far I'm not seeing any major performance penalty in using CL
instead of chunked requests. Obviously in the future that may become
more significant.
Received on 2013-07-11 19:43:22 CEST

This is an archived mail posted to the Subversion Dev mailing list.
