
Re: Test XFail/Skip policy (was: svn commit: r36439 - trunk/subversion/tests/cmdline)

From: Stefan Sperling <stsp_at_elego.de>
Date: Tue, 10 Mar 2009 14:38:10 +0000

On Tue, Mar 10, 2009 at 10:06:29AM -0400, C. Michael Pilato wrote:
> > Isn't the whole point of the test suite to identify bugs? So, if you
> > unconditionally Skip a test that fails on some platforms, it'll never
> > get fixed because it'll never get run, and no-one ever reviews skipped
> > tests. Might as well remove the test then, what's the point?
> >
> > Or maybe fix the bug, who knows.
> >
> > -- Brane
>
> I could really use some clarification on this, myself.
>
> As Brane rightly says, the point of the test suite is to identify bugs. Now
> obviously, bugs come in various shapes and sizes, with different complexities
> and severities. Some bugs we can live with for now; some we need to fix
> immediately.
>
> Our test suite provides a way to flag our level of concern about various
> bugs, too, I think. I'm just not sure we're effectively taking advantage of
> it. The following is the set of rules I've carried in my head about our
> test decorators:
>
> Skip() should be reserved for test cases that don't make sense in a given
> scenario. This is *not* a conditional XFail() -- it's like an "N/A" (not
> applicable) marker, and only that. (I suppose it could also be used
> without condition for test cases which themselves aren't completely
> well-formed, but I'd be in favor of an alias BadTest() or somesuch to
> disambiguate that case.)
>
> XFail() is for tests that fail for known reasons when those known reasons
> are deemed to be of *shrug*-level severity. You'd like the test to pass,
> but you wouldn't hold up a release for the bug that's causing it not to.
>
> Now, I don't always obey these nicely defined rules myself. When adding a
> new test, it's pretty common for me (and others) to write the test, XFail()
> it (without considering the bug's severity), and commit. Then later, we fix
> the bug, remove the XFail(), and commit the fix. As routine as this has
> become, I think this undermines the effectiveness of our test suite. I
> *should* (in my opinion) be applying the rules I listed above when composing
> that regression test, which means possibly adding a test *not* marked as
> XFail() -- even when I know it's going to fail -- if the bug's severity
> dictates as much. What if, despite my best intentions to fix the bug
> immediately, I get tied up elsewhere after composing the test but before
> fixing the bug? Do any of us routinely survey the XFail()ing tests to see
> if any particularly horrid bugs are "expected"? I know I don't. A failing
> test -- and the public outrage that accompanies it -- is infinitely louder
> than an XFail()ing test.
>
> On the flip side, I'm also aware of the "Cry Wolf" effect that can happen if
> the test suite is always failing. It benefits no one if we grow so calloused
> to the notifications of our early-warning system that we ignore them.
>
> Am I off-base here? Is there a better policy summary that someone could
> whip up (and add to hacking.html)?

I'd map categories like this:

default # expected to pass, alert me when it fails
XFAIL # known to fail, alert me if this suddenly passes again
SKIP # cannot run this test in this situation
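
For reference, these categories map onto the cmdline suite's test lists
roughly like this (just a sketch; the test names are invented, and the
real decorator signatures live in svntest/testcase.py):

  test_list = [ None,
                basic_commit,              # default: expected to PASS
                XFail(merge_reintegrate),  # expected to FAIL, alert on PASS
                # Skip takes a predicate and skips the test when it's true
                Skip(symlink_test, svntest.main.is_os_windows),
              ]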

Now, for the scenario at hand, do we need another test category?
A test that is known to fail, but should not be silenced the way
XFAIL and SKIP silence it? What about KNOWN?

KNOWN # known failure, unconditionally alert me anytime this test is run,
      # either the test is broken or there's a bug somewhere, somebody
      # should look at this asap, but don't hold up the buildbots over it,
      # and if you suddenly see this in your local WC, you likely didn't
      # break it
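
A minimal strawman of such a marker, assuming we can simply subclass
the existing wrapper classes in svntest/testcase.py (the constructor
details here are guessed, not the real internals):

  class Known(XFail):
      # Like XFail, but never silent: the harness should print a loud
      # KNOWN FAILURE line instead of the quiet XFAIL marker, without
      # turning the buildbots red.
      def __init__(self, test_case, issue=None, owner=None):
          XFail.__init__(self, test_case)
          self.issue = issue   # issue tracker number, if one is filed
          self.owner = owner   # who has signed up to look at it, if anyone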

While not perfect, this would convey more information than XFAIL/SKIP.
Maybe there should also be an attribute or comment that indicates
who is trying to fix the problem, or points to an issue number,
so that others can enquire about the problem's status.
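
At the point of use that might look like this (issue number and owner
are invented for illustration):

  test_list = [ None,
                Known(copy_mixed_rev_wc, issue=1234, owner='stsp'),
              ]

and the harness could then list all KNOWN entries together with their
issue numbers in the summary at the end of a run.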

Would something like this be useful?

Stefan
