Re: migration to NODES

From: Greg Stein <gstein_at_gmail.com>
Date: Tue, 28 Sep 2010 05:47:36 -0400

On Mon, Sep 27, 2010 at 13:25, Julian Foad <julian.foad_at_wandisco.com> wrote:
> On Fri, 2010-09-24, Greg Stein wrote:
>> On Fri, Sep 24, 2010 at 11:16, Julian Foad <julian.foad_at_wandisco.com> wrote:
>> >...
>> > I think we should produce a test framework that can give us a WC
>> > containing all the different possible WC states.  Then we can write
>> > tests against this framework: some that test a specific state, and
>> > others that apply the same operations to all the states and check
>> > that each works in every state it should.
>>
>> This requires a manual process of thinking of all states and all
>> permutations. I don't trust it.
>
> This kind of testing is more about checking that the design is
> implemented and that the code paths are exercised ...
>
>> If we could somehow capture working copies during normal test runs,
>> *then* I think we'd have "everything". We can easily get the terminal
>> state for each test, which is a good first step. It would be even
>> better if we could also get the intermediate steps.
>
> ... while this kind is more about checking for regression or unwanted
> changes of behaviour.  The two approaches are complementary.

Fair enough.
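
For the capture side, something along these lines in the test harness
teardown is probably all it takes. This is only a sketch: the sandbox
attributes ('sbox.name', 'sbox.wc_dir') and the 'wc-tarballs/'
destination are assumptions on my part, not existing harness API.

  import os, subprocess, tarfile

  def capture_wc(sbox, dest='wc-tarballs'):
      # Tar up this test's terminal working copy, and record its
      # 'svn status' output next to the tarball for later comparison.
      # Meant to be called from the harness teardown for each sandbox.
      if not os.path.isdir(dest):
          os.makedirs(dest)
      name = os.path.join(dest, sbox.name + '.tar.gz')
      tar = tarfile.open(name, 'w:gz')
      tar.add(sbox.wc_dir, arcname=sbox.name)
      tar.close()
      p = subprocess.Popen(['svn', 'status', '-v'], cwd=sbox.wc_dir,
                           stdout=subprocess.PIPE)
      open(name + '.status', 'wb').write(p.communicate()[0])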

I took the 1.6.x branch and ran the test suite. It produced about 950
working copies, and the (compressed) tarball sits at about 850k. I'm
thinking of recording 'svn status' for each working copy and checking
the tarball in as test data. We can then run 'svn upgrade' on each
working copy, run 'svn status' again, and verify that the two status
results match (modulo minor improvements in state tracking and status
reporting).
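
Roughly like this (again only a sketch: it assumes the 'wc-tarballs/'
layout and '.status' files from the capture hook above, plus a 1.7
'svn' on PATH; I'd use 'status -v' so every node gets a line):

  import glob, os, subprocess, tarfile, tempfile

  def svn(wc, *args):
      # run an svn subcommand with the WC as cwd, so paths stay relative
      p = subprocess.Popen(('svn',) + args, cwd=wc,
                           stdout=subprocess.PIPE)
      return p.communicate()[0]

  scratch = tempfile.mkdtemp()
  for tb in sorted(glob.glob('wc-tarballs/*.tar.gz')):
      recorded = open(tb + '.status', 'rb').read()  # pre-upgrade status
      tarfile.open(tb).extractall(scratch)
      wc = os.path.join(scratch, os.path.basename(tb)[:-len('.tar.gz')])
      svn(wc, 'upgrade')                  # 1.7 rewrites the WC metadata
      if svn(wc, 'status', '-v') != recorded:
          # byte-for-byte compare; a real run would have to allow for
          # the known 1.7 improvements in state tracking and reporting
          print('MISMATCH: %s' % tb)

Running status with the WC as cwd keeps the paths relative, so the
recorded and post-upgrade output stay comparable no matter where the
tarballs get unpacked.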

But... running 'svn upgrade' on about 950 working copies and checking
their status isn't cheap. The tarball size doesn't bother me too
much... I'm more concerned about test suite runtime.

Anybody else have opinions? Should this be part of the "normal" test
run? Or should we set up an "extended" set of tests and drop this into
that batch?

Or should we skip this approach entirely?

>...

Cheers,
-g
Received on 2010-09-28 11:48:16 CEST
