Mark Phippard wrote:
> Lieven,
>
> What would you think about adding some small stats output to the
> existing suite? Maybe change the output to something like this:
>
> Running all tests in compat-test [1/50]...success [14/0/1/0/2]
>
> Where the numbers are [PASS/FAIL/SKIPPED/XPASS/XFAIL]
No problem with adding this, but what would be the added value?
We have XFAIL and XPASS so that we can explicitly mark tests as failing,
and even learn when they are unknowingly fixed; it's not as if we accept
tests failing for more than a few days.
As far as I'm concerned, the test markers should match the actual
results exactly at all times, so for statistical purposes everything
there is to know can be gathered by listing the tests without running
them, which is basically what I did with a semi-automated Python script.
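That said, the summary line suggested above would be easy enough to produce. A minimal Python sketch, where the result names, the tally order, and the "FAILURE if any FAIL or XPASS" rule are my assumptions rather than actual harness behavior:

```python
# Minimal sketch of the suggested summary line; the result names and
# the FAILURE rule are assumptions, not the actual test harness.
from collections import Counter

ORDER = ["PASS", "FAIL", "SKIPPED", "XPASS", "XFAIL"]

def summary(results):
    """Format a run as 'success [PASS/FAIL/SKIPPED/XPASS/XFAIL]'."""
    counts = Counter(results)
    # An unexpected pass (XPASS) is a marker mismatch, so treat it as
    # a failure of the run, like a plain FAIL.
    verdict = "FAILURE" if counts["FAIL"] or counts["XPASS"] else "success"
    return "%s [%s]" % (verdict, "/".join(str(counts[k]) for k in ORDER))

print(summary(["PASS"] * 14 + ["SKIPPED"] + ["XFAIL"] * 2))
# -> success [14/0/1/0/2]
```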
> We are trying to get our QA team at CollabNet to use the test suite
> and they are wanting information like this from the output. It does
> not necessarily matter how it comes out or is formatted, the above was
> just a suggestion.
Is your QA team currently keeping a separate list of issues or test
scripts? Are they interested in knowing the status of certain issues,
or rather the progress of feature implementation?
I'm interested to know what problem these extra stats are going to solve.
Personally, I'd like to get two more types of information from the test suite:
1. which tests are blocking the next release: this could be solved by
adding a new marker BLOCK, but I'd rather keep a TODO-1.5 file somewhere
on trunk to maintain that list. Or we could use the issue tracker, but
not all issues and features are reported there.
2. the code coverage percentage we reach when running the full test
suite, including a list of the lines of code that aren't covered by any test.
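The kind of per-line report I have in mind can be illustrated with Python's stdlib trace module. This is only an illustration: for Subversion's C code the real tool would be gcov/lcov, and the two functions here are made up for the example.

```python
# Illustration of per-line coverage counting with the stdlib trace
# module (the real svn suite tests C code, so gcov/lcov would be the
# actual tool; 'covered'/'uncovered' are hypothetical examples).
import trace

def covered():
    return 1

def uncovered():        # never called: would appear as an unexecuted line
    return 0

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(covered)
results = tracer.results()

# counts maps (filename, lineno) -> execution count; lines absent from
# it, such as uncovered's body, were never run by the "suite".
for (fname, lineno), count in sorted(results.counts.items()):
    print("%s:%d executed %d time(s)" % (fname, lineno, count))
```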
Lieven
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Mon Apr 30 18:03:02 2007