On Thu, Nov 6, 2008 at 10:29 AM, Greg Hudson <ghudson_at_mit.edu> wrote:
> On Thu, 2008-11-06 at 09:18 -0600, Ben Collins-Sussman wrote:
>> I'd have to defer this question to the serf experts. There's an
>> unspoken assumption that saturating the pipe with parallel (and/or
>> pipelined) requests is always a speed gain, but I actually don't know
>> of evidence to back this up. Maybe it's out there, though.
>
> So, ignoring HTTP, the theory is that if you want to transfer a big file
> really fast, you should open like ten TCP connections and transfer
> tenth-sized chunks of it in parallel?
>
> I really doubt that's any faster, and if it ever is, it's only because
> you're being a bad network citizen and grabbing a bigger share of a
> congested pipe.
I believe the assumption is that you'd be making pipelined requests
over a smaller number of TCP connections, not opening ten separate
ones.
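
(For illustration only, not serf's actual API: the distinction above can be
sketched in Python against a hypothetical local HTTP/1.1 server. Pipelining
means writing all the requests onto one keep-alive connection up front and
then reading the responses back in order, rather than opening one connection
per request.)

```python
# Sketch of HTTP/1.1 pipelining over a single TCP connection.
# The server and helper names here are hypothetical, for illustration only.
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    # HTTP/1.1 implies keep-alive, so one socket can carry many requests.
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def pipelined_get(host, port, paths):
    """Send every request up front on ONE connection, then read the replies."""
    with socket.create_connection((host, port)) as s:
        # Write phase: no waiting for a response between requests.
        for p in paths:
            s.sendall(f"GET {p} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode())
        # Read phase: HTTP/1.1 guarantees responses come back in request order.
        bodies = []
        f = s.makefile("rb")
        for _ in paths:
            length = 0
            while True:
                line = f.readline().strip()
                if line.lower().startswith(b"content-length:"):
                    length = int(line.split(b":", 1)[1])
                if line == b"":
                    break  # blank line ends the headers
            bodies.append(f.read(length).decode())
        return bodies

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    # Three requests, one connection; each body echoes its request path.
    print(pipelined_get("127.0.0.1", server.server_address[1], ["/a", "/b", "/c"]))
    server.shutdown()
```

Whether this beats parallel connections depends on round-trip latency versus
bandwidth: pipelining hides per-request RTTs without taking extra congestion-
window shares, which is Greg's "good network citizen" point.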
-garrett
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe_at_subversion.tigris.org
For additional commands, e-mail: dev-help_at_subversion.tigris.org
Received on 2008-11-06 16:35:43 CET