
Test XFail/Skip policy (was: svn commit: r36439 - trunk/subversion/tests/cmdline)

From: C. Michael Pilato <cmpilato_at_collab.net>
Date: Tue, 10 Mar 2009 10:06:29 -0400

> Isn't the whole point of the test suite to identify bugs? So, if you
> unconditionally Skip a test that fails on some platforms, it'll never
> get fixed because it'll never get run, and no-one ever reviews skipped
> tests. Might as well remove the test then, what's the point?
>
> Or maybe fix the bug, who knows.
>
> -- Brane

I could really use some clarification on this, myself.

As Brane rightly says, the point of the test suite is to identify bugs. Now
obviously, bugs come in various shapes and sizes, with different complexities
and severities. Some bugs we can live with for now; some we need to fix
immediately.

Our test suite provides a way to flag our level of concern about various
bugs, too, I think. I'm just not sure we're effectively taking advantage of
it. The following is the set of rules I've carried in my head about our
test decorators (a rough sketch of how they look in practice follows the
list):

   Skip() should be reserved for test cases that don't make sense in a given
   scenario. This is *not* a conditional XFail() -- it's like an "N/A" (not
   applicable) marker, and only that. (I suppose it could also be used
   without condition for test cases which themselves aren't completely
   well-formed, but I'd be in favor of an alias BadTest() or some such to
   disambiguate that case.)

   XFail() is for tests that fail for known reasons when those known reasons
   are deemed to be of *shrug*-level severity. You'd like the test to pass,
   but you wouldn't hold up a release for the bug that's causing it not to.
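
To make that distinction concrete, here's a rough sketch of how I'd expect
the two decorators to show up in one of our cmdline test files. The test
functions are placeholders, not taken from any real test file; the aliases,
the Skip()/XFail() wrappers, the is_os_windows check, and the test_list
boilerplate follow our usual svntest conventions as best I recall them:

   # Illustrative fragment only; the two test functions are made up.
   import svntest

   # The usual shorthand aliases the cmdline tests define near the top.
   Skip = svntest.testcase.Skip
   XFail = svntest.testcase.XFail

   def symlink_checkout(sbox):
     "checkout a wc containing symlinks"
     # ... test body elided ...

   def tree_conflict_merge(sbox):
     "merge that leaves an unresolved tree conflict"
     # ... test body elided ...

   test_list = [ None,
                 # N/A on Windows -- the scenario itself doesn't apply
                 # there, so Skip() it rather than XFail() it.
                 Skip(symlink_checkout, svntest.main.is_os_windows),
                 # Known bug of *shrug*-level severity: we'd like this to
                 # pass, but we wouldn't hold up a release over it.
                 XFail(tree_conflict_merge),
               ]

   if __name__ == '__main__':
     svntest.main.run_tests(test_list)

(And if memory serves, an XFail()ed test that unexpectedly starts passing
gets reported as XPASS -- which is about the only thing that ever prompts
anyone to revisit the marker.)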

Now, I don't always obey these nicely defined rules myself. When adding a
new test, it's pretty common for me (and others) to write the test, XFail()
it (without considering the bug's severity), and commit. Then later, we fix
the bug, remove the XFail(), and commit the fix. As routine as this has
become, I think this undermines the effectiveness of our test suite. I
*should* (in my opinion) be applying the rules I listed above when composing
that regression test, which means possibly adding a test *not* marked as
XFail() -- even when I know it's going to fail -- if the bug's severity
dictates as much. What if, despite my best intentions to fix the bug
immediately, I get tied up elsewhere after composing the test but before
fixing the bug? Do any of us routinely survey the XFail()ing tests to see
if any particularly horrid bugs are "expected"? I know I don't. A failing
test -- and the public outrage that accompanies it -- is infinitely louder
than an XFail()ing test.

On the flip side, I'm also aware of the "Cry Wolf" effect that can happen if
the test suite is always failing. It benefits no one if we grow so callous
to the notifications from our early-warning system that we end up ignoring
them.

Am I off-base here? Is there a better policy summary that someone could
whip up (and add to hacking.html)?

-- 
C. Michael Pilato <cmpilato_at_collab.net>
CollabNet   <>   www.collab.net   <>   Distributed Development On Demand
------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=462&dsMessageId=1303205

Received on 2009-03-10 15:06:53 CET
