Eric Dorland <eric.dorland@mail.mcgill.ca> writes:
> * Philip Martin (philip@codematters.co.uk) wrote:
> > I agree, we just want to test the Subversion code. However, rather
> > than just hardcoding a chunk of data, I would use a simple algorithm
> > to generate some bytes. It doesn't really matter what you use; a
> > simple 0, 1, 2, ..., 253, 254, 255, 254, 253, ... 3, 2, 1, 0 repeated
> > would probably do, or how about 256 bytes incrementing in steps of 1,
> > 256 bytes incrementing in steps of 3, 256 bytes incrementing in steps
> > of 5, ...
>
> Ok, how is this better than just the hardcoded data?
I don't know if it is significantly better; it's just the way I would
have done it.
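
Something along these lines, say -- all the names here are invented
for illustration, nothing from the tree:

  #include <stddef.h>

  /* Sketch: fill BUF with 256 bytes incrementing in steps of 1, then
     256 bytes in steps of 3, then steps of 5, ..., wrapping modulo
     256.  Odd steps keep every 256-byte run a full permutation. */
  static void
  generate_test_bytes(unsigned char *buf, size_t len)
  {
    size_t i;
    unsigned char value = 0;
    int step = 1;

    for (i = 0; i < len; i++)
      {
        buf[i] = value;
        value += step;              /* wraps at 256 */
        if ((i + 1) % 256 == 0)
          step += 2;                /* next run: steps of 3, 5, 7, ... */
      }
  }
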
> If I generate the data in too regular a fashion, it's going to
> compress too well and defeat the purpose of the test.
The test can check the length of the compressed data; if it's too
short the test can either fail or generate more data.
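
For instance, with plain zlib -- the helper name is made up, but
compress() is the stock zlib call:

  #include <zlib.h>
  #include <stdlib.h>

  /* Sketch: return nonzero iff DATA still compresses to more than
     ZBUFFER_SIZE (4096) bytes, i.e. it is not too regular. */
  static int
  compresses_past_buffer(const unsigned char *data, size_t len)
  {
    uLongf clen = len + len / 1000 + 64;  /* worst case per zlib docs */
    Bytef *cbuf = malloc(clen);
    int ok = 0;

    if (cbuf && compress(cbuf, &clen, data, len) == Z_OK)
      ok = clen > 4096;
    free(cbuf);
    return ok;
  }
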
> > However you do it, the algorithm should generate an amount of data
> > based on the value of ZBUFFER_SIZE. Is ZBUFFER_SIZE something to do
> > with the uncompressed data, or the compressed data?
>
> It relates to the compressed data. It's the amount of compressed data
> a read will pull in at a time, whenever a read is done on a compressed
> stream. I tested the hardcoded data, so I know it compresses to
> something larger than ZBUFFER_SIZE (4096), which was what we wanted to
> test.
Why did you choose 4096? Is that a page size or something? What
happens if we decide to use something bigger? Your test data may no
longer be large enough to overflow the buffer. I know nothing about
the zlib API: is it sensible to hard-code this size? How does it
affect performance?
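
If hard coding the size is unavoidable, the test could at least grow
the data until it is known to compress past the buffer, something
like this (using the two made-up helpers above, error checks
omitted):

  size_t len = 2 * 4096;  /* 2 * ZBUFFER_SIZE, as a starting guess */
  unsigned char *data = malloc(len);

  generate_test_bytes(data, len);
  while (!compresses_past_buffer(data, len))
    {
      len *= 2;                       /* compressed too well; retry */
      data = realloc(data, len);
      generate_test_bytes(data, len);
    }
  /* ... feed data/len to the compressed-stream test, then free() ... */
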
--
Philip Martin