On Wed, Jul 10, 2013 at 9:57 PM, Greg Stein <gstein_at_gmail.com> wrote:
> On Wed, Jul 10, 2013 at 6:57 PM, Ben Reser <ben_at_reser.org> wrote:
>> I have about 160ms of latency to this server. Mostly because I'm on
>> the west coast of the US and it's in the UK someplace based on its
>> domain name.
> Human perception is right around the 250ms mark. User interface
> designers use that metric for response time.
> In any case. In your example, with a 160ms latency, then RTT*2 is past
> that barrier. The user will notice a lag.
> That said: there may already be other lag, so RTT*2 might be 320ms of
> a 5s response time.
I'd say that there absolutely is other lag in all of these cases. If
we were always responding under the 250ms mark that you point out
above (and that I agree with), then I'd be a lot more concerned about
this change. However, every one of these cases results in a lot more
delay than that. I guess I could have measured that, and measured how
long the delay was for the first result. But I'm pretty confident in
saying this change doesn't push us past a 250ms delay in any case
where we're not already past it.
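As a back-of-the-envelope check, the 160ms example above can be worked through like this; a sketch, with the 250ms figure taken from Greg's perception threshold and nothing else assumed:

```python
# Perceptual response-time budget cited above (milliseconds).
PERCEPTION_THRESHOLD_MS = 250

def noticeable(rtt_ms, round_trips=2):
    """Return True if the extra round trips alone exceed the
    250ms perception threshold."""
    return rtt_ms * round_trips > PERCEPTION_THRESHOLD_MS

# 160ms of latency, RTT*2 as in the example: 320ms, past the barrier.
print(noticeable(160))  # True
# Against a nearby server (~20ms), two round trips stay well under it.
print(noticeable(20))   # False
```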
Consider the log --diff example I gave. It used 91 requests, 11 of
which were extra OPTIONS requests to detect chunking, in order to
provide the log and diff of 5 revisions. It used 11 sessions and 22
connections to do so. Even without this patch, log --diff noticeably
pauses on each log entry to retrieve the diff, well past a 250ms delay
when I'm using it against svn.us.apache.org, to which I have around
20ms of latency.
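Using the numbers from that run (11 extra OPTIONS probes, one per session), the aggregate cost of the probes scales directly with latency. A rough estimate, assuming each probe costs one full round trip:

```python
def probe_overhead_ms(extra_probes, rtt_ms):
    """Total extra wall-clock time spent on chunking-detection probes,
    assuming one full round trip per OPTIONS probe."""
    return extra_probes * rtt_ms

# 11 probes at ~20ms (svn.us.apache.org from my location): ~220ms in
# aggregate, spread across 11 sessions rather than one response.
print(probe_overhead_ms(11, 20))   # 220
# The same 11 probes at 160ms would cost ~1.76s in aggregate.
print(probe_overhead_ms(11, 160))  # 1760
```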
On another front, I ran our test suite and timed it in several
configurations (this is with the tristate branch, exercising its
option values), among them:
  auto (through nginx 1.2.9)
  no (through nginx 1.2.9)
Obviously latency is almost a non-issue here because everything is
local (<1ms) so what I'm really looking to show is the relative
performance of CL vs Chunked requests.
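For reference, the two framings being compared differ only in how the body length is communicated on the wire. A minimal sketch of each (illustrative only, not Subversion's actual serf code):

```python
def content_length_framing(data):
    """Content-Length framing: the total length must be known up front."""
    header = b"Content-Length: %d\r\n" % len(data)
    return header, data

def chunked_framing(data):
    """Chunked framing: each chunk is prefixed with its hex size, and a
    zero-length chunk terminates the stream -- no total length needed."""
    header = b"Transfer-Encoding: chunked\r\n"
    body = b"%x\r\n%s\r\n0\r\n\r\n" % (len(data), data)  # single chunk
    return header, body

hdr, body = chunked_framing(b"hello")
print(body)  # b'5\r\nhello\r\n0\r\n\r\n'
```

The practical difference is that CL forces the sender to buffer (or pre-measure) the body, which is where the temp-file concern below comes from.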
I didn't repeat these tests, so you're seeing a single result. So
there may be some variation between test runs that isn't averaged out.
As you can see, though, there is very little difference in these tests
as far as time goes. The biggest difference is 23 seconds, which is
just slightly more than a 1% performance degradation between
yes(direct) and no(through nginx 1.2.9), and at least some of that
comes from nginx being inline.
Now part of the concern over CL is that we'd end up having to use temp
files. My test setup was using a ramdisk for the test workspace and
the system disk is an SSD, so that might help hide some of that. Even
more importantly, I believe the request needs to be over 100k before
it'd need a tempfile, so I kinda doubt our test suite exercises that
much.
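The buffer-then-spill behavior described here (hold the request body in memory up to a threshold, fall back to a temp file past it) is the same pattern Python's stdlib exposes, which makes for a compact illustration. A sketch with an assumed 100k cutoff, not the actual ra_serf code:

```python
import tempfile

SPILL_THRESHOLD = 100 * 1024  # assumed ~100k cutoff from the discussion

def buffer_request_body(chunks):
    """Accumulate a request body, spilling to a temp file only if it
    grows past the threshold; small bodies never touch the disk."""
    buf = tempfile.SpooledTemporaryFile(max_size=SPILL_THRESHOLD)
    for chunk in chunks:
        buf.write(chunk)
    buf.seek(0)
    return buf

small = buffer_request_body([b"x" * 1024])        # 1k: stays in memory
large = buffer_request_body([b"x" * 1024] * 200)  # ~200k: spills to disk
# _rolled is a CPython implementation detail, used here just to observe
# whether the spill actually happened.
print(small._rolled, large._rolled)  # False True
```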
But so far I'm not seeing any major performance penalty in using CL
instead of chunked requests. Obviously in the future that may become
an issue.
Received on 2013-07-11 19:43:22 CEST