Hi Markus,
On Mon, Dec 3, 2012 at 10:43 AM, Markus Schaber <m.schaber_at_codesys.com> wrote:
> Hi,
>
> Just another crazy idea:
>
> The main problem with the parallelization in ra_serf seems to be the number of http requests (which potentially causes a high number of authentications and tcp connections).
>
> Maybe we could add some partitioned send-all request:
>
> When the client decides to use 4 connections, it could send 4 requests, with some parameter like send-all(1/4), send-all(2/4), ..., send-all(4/4).
>
> Then the server can send one quarter of the complete response on each connection.
>
> An advanced server could even share the common state of those 4 requests through some shared memory / caching scheme, to avoid doing the same work multiple times.
>
> Years ago, I implemented a similar scheme between caching GIS web frontend servers, and the rendering backend server, in the protocol for fetching and rendering the map tiles. It gave a nearly linear speedup with the number of connections, up to the point where the CPUs were saturated.
>
The concept implemented in ra_serf is to parallelize individual GET
requests, so that the responses can be cached by proxies on either the
client or the server side. So we want to avoid using send-all as much
as possible, since it always produces one large, uncacheable response.
I've made a mental note of your idea though: if we need to stick with
send-all and improve it further, your suggestion is one way to do so.
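For what it's worth, the client side of the partitioning you describe could be sketched roughly like this (Python, purely illustrative; the send-all(part/total) parameter is your proposal, not an existing ra_serf or mod_dav_svn feature, and the fetch is simulated rather than doing real HTTP):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_partition(part, total):
    # Hypothetical: ask the server for the part-th slice of the full
    # send-all response, e.g. via a "send-all(part/total)" parameter
    # on the REPORT request.  Simulated here by slicing a fixed item
    # list the way a partitioning server might.
    items = [f"item-{i}" for i in range(20)]
    return items[part::total]  # round-robin partition of the work

def parallel_send_all(total=4):
    # One request per connection; each connection receives a
    # disjoint slice of the full response, merged on the client.
    with ThreadPoolExecutor(max_workers=total) as pool:
        slices = pool.map(lambda p: fetch_partition(p, total),
                          range(total))
    return [item for part in slices for item in part]

result = parallel_send_all(4)
# Every item arrives exactly once across the 4 partitions.
assert sorted(result) == sorted(f"item-{i}" for i in range(20))
```

The open question with such a scheme, as you note, is keeping the per-request server state cheap enough that N partitioned requests don't cost N times the work of one send-all response.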
>
> Best regards
>
> Markus Schaber
>
thanks,
Lieven
[..]
Received on 2012-12-10 21:01:29 CET