On Mon, Sep 27, 2010 at 13:25, Julian Foad <julian.foad_at_wandisco.com> wrote:
> On Fri, 2010-09-24, Greg Stein wrote:
>> On Fri, Sep 24, 2010 at 11:16, Julian Foad <julian.foad_at_wandisco.com> wrote:
>> > I think we should produce a test framework that can give us a WC
>> > containing all the different possible WC states. Then we can write
>> > tests against this framework, some tests that test specific state, and
>> > other tests that apply the same operations to all the states and check
>> > that it works in all the states that it should.
>> This requires a manual process of thinking of all states and all
>> permutations. I don't trust it.
> This kind of testing is more about checking that the design is
> implemented and that the code paths are exercised ...
>> If we could somehow capture working copies from during normal test
>> runs, *then* I think we'd have "everything". We can easily get the
>> terminal state for each test, which is a great first step. It would be
>> great if we could also get the intermediate steps.
> ... while this kind is more about checking for regression or unwanted
> changes of behaviour. The two approaches are complementary.
I took the 1.6.x branch and ran the test suite. It produced about 950
working copies, and the (compressed) tarball sits at about 850k. I'm
thinking about recording 'svn status' for each working copy, and
checking the tarball in as test data. We can then run 'svn upgrade'
on each working copy, run 'svn status' again, and verify that the two
status results match (caveat: minor improvements in state tracking).
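A minimal sketch of the verify-match step, assuming each captured
working copy has its pre-upgrade 'svn status' output saved alongside it
(the helper names and the line normalization here are mine, not
anything in the tree):

```python
import subprocess

def normalize_status(text):
    """Strip trailing whitespace, drop blank lines, and sort, so
    incidental ordering differences don't cause false mismatches."""
    lines = [line.rstrip() for line in text.splitlines() if line.strip()]
    return sorted(lines)

def verify_upgrade(wc_path, recorded_status):
    """Upgrade one captured working copy and compare its fresh
    'svn status' output against the recorded pre-upgrade status."""
    subprocess.check_call(['svn', 'upgrade', wc_path])
    after = subprocess.check_output(['svn', 'status', wc_path]).decode()
    return normalize_status(after) == normalize_status(recorded_status)
```

Any known-good differences (improved state tracking) would need a small
whitelist layered on top of the straight comparison.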
But... running upgrade on about 950 working copies and checking their
status isn't cheap. The tarball sizes don't bother me too much... I'm
more concerned about test suite runtime.
Anybody else have thoughts? Should this be part of the normal test run?
Or should we set up an "extended" set of tests and drop this into that
batch?
Or not even go with this approach?
Received on 2010-09-28 11:48:16 CEST