
Re: Status report of testsuite on trunk @ r24830

From: Mark Phippard <markphip_at_gmail.com>
Date: 2007-04-30 18:38:21 CEST

On 4/30/07, Lieven Govaerts <svnlgo@mobsol.be> wrote:
> Mark Phippard wrote:
> > Lieven,
> >
> > What would you think about adding some small stats output to the
> > existing suite? Maybe change the output to something like this:
> >
> > Running all tests in compat-test [1/50]...success [14/0/1/0/2]
> >
> > Where the numbers are [PASS/FAIL/SKIPPED/XPASS/XFAIL]
>
> No problem adding this, but what is the added value?

To be honest, I asked the same question and never felt I understood
their reasons. It seemed critical to them to have an accurate
accounting of the number of tests they ran, though. I was trying to
say that if you know there are 500 tests and 2 failed and 4 were
skipped, then you know that 494 passed.
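
Just to make it concrete, the tallying could look something like
this. This is only a sketch with made-up names, not the actual
svntest harness API:

    # Sketch only: tally one script's outcomes into the five buckets
    # and print the proposed one-line summary.  All names are made up.
    def print_summary(script, index, total, outcomes):
        counts = {'PASS': 0, 'FAIL': 0, 'SKIP': 0, 'XPASS': 0, 'XFAIL': 0}
        for outcome in outcomes:        # e.g. ['PASS', 'XFAIL', 'SKIP', ...]
            counts[outcome] += 1
        if counts['FAIL'] or counts['XPASS']:
            status = 'FAILURE'
        else:
            status = 'success'
        print 'Running all tests in %s [%d/%d]...%s [%d/%d/%d/%d/%d]' % (
            script, index, total, status,
            counts['PASS'], counts['FAIL'], counts['SKIP'],
            counts['XPASS'], counts['XFAIL'])

    # print_summary('compat-test', 1, 50,
    #               ['PASS'] * 14 + ['SKIP'] + ['XFAIL'] * 2)
    # prints: Running all tests in compat-test [1/50]...success [14/0/1/0/2]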

> We have XFAIL and XPASS so that we can explicitly mark tests as failing
> and even learn when they are unknowingly fixed; it's not like we accept
> failing tests for more than a few days.

I was under the impression that XFAIL tests are occasionally added
when a problem is recognized and then live in the code until a fix is
made, which could be a long time. I am less concerned with listing
these than with keeping an accurate accounting.
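
For what it's worth, my understanding is that marking one of these is
just a matter of wrapping its entry in the script's test_list.
Roughly like this, simplified and from memory rather than copied from
a real test file:

    import svntest

    # (abbreviation, as the cmdline scripts do)
    XFail = svntest.testcase.XFail

    def basic_checkout(sbox):
      "checkout a working copy"        # doc string the harness displays
      pass                             # test body omitted in this sketch

    def known_broken(sbox):
      "exercise a known, unfixed bug"  # hypothetical test
      pass                             # test body omitted in this sketch

    test_list = [ None,                # slot 0 unused, so numbering starts at 1
                  basic_checkout,
                  XFail(known_broken), # expected failure; XPASS if it passes
                ]

    if __name__ == '__main__':
      svntest.main.run_tests(test_list)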

> AFAIC the test markers should match the actual results exactly
> at all times, so for statistics everything there is to know can be
> gathered by listing the tests without running them, which was basically
> what I did with a semi-automated Python script.

How do you do this? I was not aware of any method other than
running the tests. Feel free to point me at some docs; all I know is
make check.
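
I can imagine importing each script and classifying the entries in
its test_list without calling any of them. Untested and off the top
of my head, something like:

    # Untested sketch: count test markers without running anything.
    # Assumes it is run from the tests/cmdline directory so that both
    # svntest and the test scripts are importable.
    import sys
    import svntest.testcase

    def count_markers(module_name):
        module = __import__(module_name)   # e.g. 'basic_tests'
        counts = {'XFAIL': 0, 'SKIP': 0, 'expected PASS': 0}
        for entry in module.test_list[1:]: # entry 0 is the None placeholder
            if isinstance(entry, svntest.testcase.XFail):
                counts['XFAIL'] += 1
            elif isinstance(entry, svntest.testcase.Skip):
                counts['SKIP'] += 1
            else:
                counts['expected PASS'] += 1
        return counts

    if __name__ == '__main__':
        print count_markers(sys.argv[1])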

> > We are trying to get our QA team at CollabNet to use the test suite,
> > and they want information like this from the output. It does not
> > necessarily matter how it comes out or is formatted; the above was
> > just a suggestion.
>
> Is your QA team currently keeping a separate list of issues or test
> scripts? Are they interested in knowing the status of certain
> issues? Or rather the progress of feature implementation?
> I'm interested to know what problem these extra stats are going to solve.

My team is not involved in the development process specifically. We
are certifying the release builds that we publish on openCollabNet,
so we would generally expect the tests to pass unless there was some
problem in our build process. These sound like cool ideas and
features though, especially the code coverage.

-- 
Thanks
Mark Phippard
http://markphip.blogspot.com/
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
