Eric Gillespie wrote:
> "Erik Huelsmann" <ehuels@gmail.com> writes:
>
>
>> Yes. But you already pointed that out in that e-mail. What use is
>> buildbot to others if this *known* regression makes all ra_neon builds
>> fail? I'm strongly against committing failing tests knowingly, for
>>
>
> What? Why does it make builds fail? It should only result in
> this one test showing up as failing in automated test runs.
> Which is as it should be, no? We don't want all lights green if
> the code is known broken!
>
The best way to handle a FAILing test is to detect it as soon as possible and
then fix it as soon as possible.
If fixing the failed test isn't an option, we have to choose between:
- reverting the change that causes the test to fail
- marking the test as XFail and reporting an issue with milestone 1.5
(since it's a regression, we want to have it fixed before releasing)
- keeping the FAILed test, at which point buildbot starts to become useless.
What's the use of being reminded of a failing test five times a day? It only
trains you to ignore those emails/RSS feeds.
To keep buildbot useful for the project, the tests should pass most of
the time, and if they're failing that should be a temporary situation.
The longer it takes to bring the buildbot lights back to green, the bigger
the risk that additional failing tests show up and get ignored in the flood
of emails. We have other and better ways to indicate that the Subversion
code is known to be broken (in certain places): use issues and XFailing
tests for that.
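
For what it's worth, Subversion's own test harness has its XFail wrapper for
exactly this; as a generic illustration of the same idea, here is a minimal
sketch using Python's standard `unittest.expectedFailure` decorator. The
`encode` function and the test name are hypothetical stand-ins for the
regressed code path, not anything from the Subversion tree:

```python
import unittest

def encode(s):
    # Hypothetical buggy placeholder standing in for the regressed
    # code path: it should escape spaces as %20 but does not.
    return s

class KnownRegression(unittest.TestCase):
    # Marking the test as an expected failure keeps the suite green
    # while the bug stays visible and tracked in the issue tracker.
    @unittest.expectedFailure
    def test_space_escaping(self):
        self.assertEqual(encode("a b"), "a%20b")

suite = unittest.TestLoader().loadTestsFromTestCase(KnownRegression)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The run is reported as successful even though the test body fails, so the
buildbot lights stay green; once the regression is fixed, the run flips to
"unexpected success", which is the signal to remove the marker.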
Lieven
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Thu Aug 9 21:51:29 2007