On 21.02.2011 12:04, Philip Martin wrote:
> Stefan Sperling <stsp_at_elego.de> writes:
>
>> On Mon, Feb 21, 2011 at 01:44:35PM +0530, Noorul Islam K M wrote:
>>> This patch reduces the number of checkouts by a factor of around 23.
>> On my system the difference is 43 seconds vs. 30 seconds.
> On my low-end Linux desktop it's 7.5 seconds and 3.5 seconds, run
> sequentially on a SATA disk.
>
>> We lose the ability to easily spot which of the subtests is failing
>> if we do this. I.e. instead of:
>>
>> ...
>> PASS: input_validation_tests.py 19: non-working copy paths for 'status'
>> FAIL: input_validation_tests.py 20: non-working copy paths for 'patch'
>> PASS: input_validation_tests.py 21: non-working copy paths for 'switch'
>> ...
>>
>> all we'd get is:
>>
>> FAIL: input_validation_tests.py 1: invalid wc and url targets
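
[For illustration, a minimal sketch of the pattern under discussion, in
plain Python with stand-in functions rather than the actual svntest
harness; all names below are made up for the example.]

    # Before the patch: each subtest is its own numbered test, so the
    # harness prints one PASS/FAIL line per subtest, but each subtest
    # pays for its own working-copy checkout.
    # After the patch: one combined test shares a single checkout.

    def build_working_copy():
        """Stand-in for the expensive per-test checkout."""
        return {"path": "/tmp/wc"}

    SUBTESTS = [
        ("non-working copy paths for 'status'", lambda wc: None),
        ("non-working copy paths for 'patch'",  lambda wc: None),
        ("non-working copy paths for 'switch'", lambda wc: None),
    ]

    def invalid_wc_and_url_targets():
        "invalid wc and url targets"
        wc = build_working_copy()  # one checkout instead of one per subtest
        for name, check in SUBTESTS:
            check(wc)              # the first exception aborts the loop, so
                                   # the harness reports a single FAIL line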
> When a test fails the first thing I do is look in tests.log, that will
> still work just as well with the combined test. I suppose we do lose
> out if there are multiple independent bugs, as the test will stop at the
> first one and not report the others.
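
[Continuing the sketch above: if the combined test collected failures
instead of raising on the first one, multiple independent bugs could
still be reported. Again illustrative, not the actual harness.]

    def invalid_wc_and_url_targets_report_all():
        "invalid wc and url targets (report every failing subtest)"
        wc = build_working_copy()
        failures = []
        for name, check in SUBTESTS:
            try:
                check(wc)
            except Exception as err:
                failures.append((name, err))  # keep going past the first bug
        if failures:
            raise AssertionError("failing subtests: "
                                 + ", ".join(name for name, _ in failures))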
>
> I feel we should be optimising for the common case where the tests PASS,
> and that optimising for FAILs is the wrong approach.
>
> I agree that combining big, complex tests, like merge, is a bad idea.
> But for relatively trivial tests I think reduced runtime is more
> important.
We should not be optimising tests for performance over clarity, ever. In
other words -- don't combine them. Anyone who has trouble with tests
taking too long can use a RAMdisk, --bdb-txn-nosync, and other tweaks to
make the tests run faster.
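
[For reference, those tweaks look roughly like this on a typical Linux
box; a sketch assuming /dev/shm is a tmpfs mount, that it is run from
the directory holding the test scripts and their svn-test-work scratch
area, and that the test script accepts the --bdb-txn-nosync option
named above. Paths are illustrative.]

    import os
    import shutil
    import subprocess

    # Redirect the harness's scratch area onto a RAM disk.
    ram = "/dev/shm/svn-test-work"   # assumption: tmpfs mount point
    os.makedirs(ram, exist_ok=True)
    if os.path.isdir("svn-test-work") and not os.path.islink("svn-test-work"):
        shutil.rmtree("svn-test-work")
    if not os.path.exists("svn-test-work"):
        os.symlink(ram, "svn-test-work")

    # Run one test script with BDB fsync disabled, per the tweak above.
    subprocess.run(["python", "input_validation_tests.py",
                    "--bdb-txn-nosync"], check=True)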
I think you've all gone way off track here. Reality check, please? What
is the purpose of tests? To validate the behaviour of the code, or to
make developers' lives easier?
-- Brane
Received on 2011-02-21 12:57:36 CET