
Re: HTTP protocol v2: rethunk.

From: Mark Mielke <mark_at_mark.mielke.cc>
Date: Thu, 06 Nov 2008 12:38:21 -0500

Greg Hudson wrote:
> On Thu, 2008-11-06 at 09:18 -0600, Ben Collins-Sussman wrote:
>> I'd have to defer this question to the serf experts. There's an
>> unspoken assumption that saturating the pipe with parallel (and/or
>> pipelined) requests is always a speed gain, but I actually don't know
>> of evidence to back this up. Maybe it's out there, though.
> So, ignoring HTTP, the theory is that if you want to transfer a big file
> really fast, you should open like ten TCP connections and transfer
> tenth-sized chunks of it in parallel?
> I really doubt that's any faster, and if it ever is, it's only because
> you're being a bad network citizen and grabbing a bigger share of a
> congested pipe.

Pipelining means that you don't start and stop your pipe. The
start/stop doesn't make you a better citizen; it just means both ends
of the pipe sit idle more often.
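To make the idle-time point concrete, here's a back-of-envelope model (not
Subversion or serf code; the request count, RTT, and per-request transfer
time are illustrative assumptions) comparing N small requests issued
one-at-a-time against the same N requests pipelined on one connection:

```python
def sequential_time(n, rtt, xfer):
    # One request at a time: each request waits out a full round trip
    # before the next can be sent, so the pipe idles one RTT per request.
    return n * (rtt + xfer)

def pipelined_time(n, rtt, xfer):
    # All requests go out back-to-back: only the first response pays the
    # round-trip latency, after which data flows continuously.
    return rtt + n * xfer

if __name__ == "__main__":
    # Assumed numbers: 100 requests, 50 ms RTT, 2 ms of transfer each.
    n, rtt, xfer = 100, 0.050, 0.002
    print(sequential_time(n, rtt, xfer))  # about 5.2 seconds
    print(pipelined_time(n, rtt, xfer))   # about 0.25 seconds
```

The ratio grows with the request count: the sequential client spends almost
all its time waiting on round trips, which is exactly the start/stop cost
pipelining removes.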

Parallel is open to debate. Are BitTorrent users bad citizens? The net
neutrality defenders will say no: it's my data, and it's my network path
that I pay for, so nobody should care what I do with it. I tend to think
it's bad myself. But ten TCP connections transferring 1/10 each is not
where the benefit comes from. As with pipelining, the goal is to
eliminate the start/stops. By saturating the pipe from multiple sources,
even if one source stalls (say, to read a block from disk), the other
sources continue. Also, I don't think it scales to 10 unless it's a very
bad network. 2 or 3 connections should be enough to keep the pipe
continuously saturated.
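A toy model of that stall argument (illustrative only; the stall
probability is an assumption, and real stalls are not independent): if each
of k sources is stalled some fraction p of the time, the pipe only goes
idle when all k are stalled at once, so the idle fraction drops
geometrically with k:

```python
def idle_fraction(k, p):
    # The pipe is idle only when every one of the k independent sources
    # is stalled simultaneously; with independence that's p**k.
    return p ** k

if __name__ == "__main__":
    # Assume each source is stalled 30% of the time.
    for k in (1, 2, 3, 10):
        print(k, idle_fraction(k, 0.3))
```

Going from 1 to 2 or 3 connections cuts the idle fraction from 30% to 9%
and then 2.7%; the tenth connection buys almost nothing, which matches the
intuition that 2 or 3 streams are enough on any reasonable network.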


Mark Mielke <mark_at_mielke.cc>
Received on 2008-11-06 18:38:38 CET

This is an archived mail posted to the Subversion Dev mailing list.
