Branko Čibej <brane_at_apache.org> writes:
> Of course latency, for practical purposes, tells you how many gateways
> there are between the client and the server, not what the effective
> bandwidth is.
Overall, my intention here was to improve what I think is a reasonably
common case, with the server located "in the same building" and the
repository containing a lot of large and possibly incompressible files
(assets, documents, etc.), without affecting other cases. Currently, in
such a scenario with the default configuration, both the HTTP client and
the server do _a lot of_ unnecessary compression work, and that visibly
slows things down.
The assumption here is that low-latency connections most likely have
enough bandwidth to cover the difference between the compression ratios
of LZ4 and zlib-5, allowing us to use the much faster compression
algorithm. (I'm not sure it's even possible to determine the effective
bandwidth beforehand, considering things like TCP window auto-scaling.)
To avoid potential regressions, the current approach always falls back
to zlib-5. There might be cases where this results in a suboptimal
decision, e.g., for fat networks with medium or high latency (and even
that is not so obvious, since the traffic can have a cost), but I think
it should work well in practice for the case described above while
avoiding regressions in other cases.
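To make the decision rule concrete, here is a minimal sketch of the
selection logic, in Python for brevity. The threshold value and all
names are illustrative assumptions, not what the actual patch uses:

```python
# Hypothetical cutoff: connections with a round-trip time below this are
# treated as "local" (same building / LAN) and get the faster LZ4;
# everything else keeps the conservative zlib-5 default. The value is
# made up for illustration.
LOW_LATENCY_THRESHOLD_MS = 10.0


def choose_compression(rtt_ms):
    """Pick a compression algorithm from a measured round-trip time.

    Always falls back to zlib-5 when the connection does not look
    low-latency, mirroring the conservative approach described above.
    """
    if rtt_ms < LOW_LATENCY_THRESHOLD_MS:
        return "lz4"
    return "zlib-5"


# A LAN-like RTT selects the faster algorithm; anything slower keeps
# the existing default.
print(choose_compression(1.5))   # low latency -> lz4
print(choose_compression(80.0))  # WAN latency -> zlib-5
```

The point of the single threshold is that latency is cheap to measure
on an existing connection, whereas effective bandwidth is not.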
Received on 2017-08-04 16:45:37 CEST