
Re: migration to NODES

From: Hyrum K. Wright <hyrum_wright_at_mail.utexas.edu>
Date: Tue, 28 Sep 2010 09:35:50 -0500

On Tue, Sep 28, 2010 at 4:47 AM, Greg Stein <gstein_at_gmail.com> wrote:
> On Mon, Sep 27, 2010 at 13:25, Julian Foad <julian.foad_at_wandisco.com> wrote:
>> On Fri, 2010-09-24, Greg Stein wrote:
>>> On Fri, Sep 24, 2010 at 11:16, Julian Foad <julian.foad_at_wandisco.com> wrote:
>>> >...
>>> > I think we should produce a test framework that can give us a WC
>>> > containing all the different possible WC states.  Then we can write
>>> > tests against this framework, some tests that test specific state, and
>>> > other tests that apply the same operations to all the states and check
>>> > that it works in all the states that it should.
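[Editorial sketch, not code from the thread: Julian's idea of a state matrix could look roughly like this. All names here (`wc_normal`, `run_on_all_states`, etc.) are hypothetical illustrations, and the "states" are stand-in dicts rather than real working copies.]

```python
# Illustrative sketch of the proposed framework: build a working copy
# in each known state, then apply the same operation to every state
# and collect the per-state results for checking.

def wc_normal():   return {"node": "file", "status": "normal"}
def wc_added():    return {"node": "file", "status": "added"}
def wc_deleted():  return {"node": "file", "status": "deleted"}

# In a real framework this list would enumerate every reachable WC state.
ALL_STATES = [wc_normal, wc_added, wc_deleted]

def run_on_all_states(operation):
    """Apply one operation to a WC in every state; map state name -> result."""
    return {build.__name__: operation(build()) for build in ALL_STATES}

# Example operation: does the node count as locally changed?
results = run_on_all_states(lambda wc: wc["status"] != "normal")
```

A per-state test would then assert the expected result for each entry of `results`, making it obvious which states an operation has never been exercised against.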
>>>
>>> This requires a manual process of thinking of all states and all
>>> permutations. I don't trust it.
>>
>> This kind of testing is more about checking that the design is
>> implemented and that the code paths are exercised ...
>>
>>> If we could somehow capture working copies from during normal test
>>> runs, *then* I think we'd have "everything". We can easily get the
>>> terminal state for each test, which is a great first step. It would be
>>> great if we could also get the intermediate steps.
>>
>> ... while this kind is more about checking for regression or unwanted
>> changes of behaviour.  The two approaches are complementary.
>
> Fair enough.
>
> I took the 1.6.x branch and ran the test suite. It produced about 950
> working copies, and the (compressed) tarball sits at about 850k. I'm
> thinking about recording 'svn status' for each working copy, and
> checking the tarball in as test data. We can then run an 'svn upgrade'
> on each working copy, then 'svn status', and verify that the two
> status results match. (caveat minor improvements in state tracking and
> status reporting)
>
> But... running upgrade on about 950 working copies and checking their
> status isn't cheap. The tarball sizes don't bother me too much... I'm
> more concerned about test suite runtime.
>
> Anybody else? Should this be "normal test run"? Or should we set up an
> "extended" set of tests and drop this into that batch?

I've long been advocating the need for an "extended" test
suite. (I mean, really, merge_tests.py exercises all of the "basic"
functions in so many more ways than basic_tests.py does that the latter
is pretty much redundant for pre-commit checks.) This might be an
opportunity to start that ball rolling, but it's probably also a lot
of work. :)

-Hyrum
Received on 2010-09-28 16:36:29 CEST

This is an archived mail posted to the Subversion Dev mailing list.
