Julian Foad wrote:
> On Wed, 2010-08-18, Stefan Sperling wrote:
>
>> On Tue, Aug 17, 2010 at 11:57:11PM +0200, Stefan Fuhrmann wrote:
>>
>>> I overgeneralized my use-case here: I actually needed "move forward" only.
>>> The concept of "skip N bytes", however, is perfectly in line with
>>> the general
>>> stream semantics. Because this also fits my needs, I changed the API in
>>> r986485 accordingly.
>>>
>> But the stream implementation may still need to read all bytes until N
>> anyway, to make sure the internal state is still valid after moving to
>> offset N.
>>
>> Of course, we could say that a stream implementation is free to optimize
>> for this use case if possible, but allow wrapper streams to override
>> the "move forward" implementation.
>>
>> For instance, a stream wrapping an APR file would support "move forward",
>> but as soon as you wrap this APR file stream with a translation stream,
>> the translation stream's semantics take over and the APR file stream
>> will effectively be read byte-per-byte when the "move forward" method
>> is called on the translation stream. This would allow the translation stream
>> to keep its internal state intact during a "move forward" operation.
>>
>> Performance benefits would then depend on the type of stream being used.
>>
>> Does this make sense?
>>
>
> Yes, that's exactly what I was thinking. Sounds good.

I agree with all of the above. The key point is that reading a chunk of
data (e.g. 80 bytes) at once from a translated stream is much faster
than reading each byte individually, because most streams buffer their
data; even translated streams use a translation buffer.
Of course, it is even better if a stream can skip that data directly,
but that is not a requirement for the stream_readline optimization.
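
To make that concrete, here is a minimal sketch in C of how such a "skip"
could be layered on top of a generic stream vtable. The names (my_stream_t,
skip_fn, my_stream_skip, ...) are hypothetical and this is not the actual
svn_stream_t interface: a stream that can seek installs a cheap skip
callback, while a wrapper such as a translation stream leaves it unset and
falls back to reading and discarding the data in chunks, which keeps its
internal state valid and is still far cheaper than byte-by-byte reads.

/*
 * Illustrative sketch only -- hypothetical names, not the svn_stream_t API.
 */
#include <stdio.h>

typedef struct my_stream_t my_stream_t;

struct my_stream_t
{
  void *baton;
  /* Read up to *len bytes into BUF; set *len to the amount actually read. */
  int (*read_fn)(void *baton, char *buf, size_t *len);
  /* Optional: skip LEN bytes cheaply.  NULL means "no cheap skip". */
  int (*skip_fn)(void *baton, size_t len);
};

/* Skip LEN bytes.  Use the stream's own skip callback if it has one;
   otherwise read and discard the bytes in fixed-size chunks, so any
   wrapper (e.g. a translation stream) keeps its internal state intact. */
static int
my_stream_skip(my_stream_t *stream, size_t len)
{
  char scratch[4096];

  if (stream->skip_fn)
    return stream->skip_fn(stream->baton, len);

  while (len > 0)
    {
      size_t chunk = len < sizeof(scratch) ? len : sizeof(scratch);
      if (stream->read_fn(stream->baton, scratch, &chunk) != 0 || chunk == 0)
        return -1;  /* error or premature EOF */
      len -= chunk;
    }
  return 0;
}

/* A plain file-backed stream could implement the cheap skip via fseek(). */
static int
file_skip(void *baton, size_t len)
{
  return fseek((FILE *)baton, (long)len, SEEK_CUR);
}

With a layout like this, the cost of a skip depends on the concrete stream,
exactly as described above: wrapping a file stream in a translation stream
silently degrades "skip" to a buffered read-and-discard loop, without the
caller having to care.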
-- Stefan^2.
Received on 2010-08-18 20:56:22 CEST