On Fri, Feb 4, 2011 at 7:20 PM, Paul Burba <ptburba_at_gmail.com> wrote:
> On Fri, Feb 4, 2011 at 2:11 PM, Hyrum K Wright <hyrum_at_hyrumwright.org> wrote:
>> On Fri, Feb 4, 2011 at 7:03 PM, Blair Zajac <blair_at_orcaware.com> wrote:
>>>
>>> On Feb 4, 2011, at 9:15 AM, Hyrum K Wright wrote:
>>>
>>>> We currently mark tests XFail (or Skip, or something else) by wrapping
>>>> them in the test_list in the test suite. I think it makes more sense
>>>> to use Python's decorator syntax to mark tests as XFail right at their
>>>> definition, rather than down in the test list. Keeping all attributes
>>>> of a test in close proximity is a Good Thing, imo. Attached is a patch
>>>> which demonstrates this.
>>>>
>>>> Decorators were added to Python in 2.4, which is the minimum version
>>>> required for our test suite. In addition to the functional
>>>> decorators, we should be able to add ones which record other
>>>> information, such as the issues which the tests were added for. (In
>>>> some future world, we could also divide up the test suite into
>>>> "levels", and decorators could be added to indicate that.)
>>>>
>>>> Thoughts?
>>>
>>> Sounds good to me.
>>>
>>> The decorators could take a required issue number as a parameter, thereby forcing an issue to exist in the issue tracker.
>>
>> I don't think we should require all tests to have an issue number
>> associated with them.
>
> Not *every* test, I agree, but perhaps requiring *XFails* to have an
> associated issue wouldn't be such a bad idea (once we have issues
> associated with all the current XFails, of course). It would certainly
> make the "Which XFailing tests are release blockers?" question a lot
> easier to answer.
Agreed, and we could easily change the XFail decorator to require an
issue number. I suppose we could do that with the existing XFail
infrastructure, too.
The @Issue decorator could also be used with tests which PASS, for completeness.
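Something like this rough sketch, perhaps (all names, attributes, and
issue numbers below are illustrative only; they are not the actual
svntest API nor the patch as posted):

def XFail(issue=None):
    """Mark a test as expected to fail; require an associated issue."""
    if issue is None:
        raise ValueError("XFail tests must reference an issue number")
    def decorate(func):
        func.expected_to_fail = True  # hypothetical flag read by the harness
        func.issues = getattr(func, 'issues', []) + [issue]
        return func
    return decorate

def Issue(*numbers):
    """Record the tracker issue(s) a test relates to (passing tests too)."""
    def decorate(func):
        func.issues = getattr(func, 'issues', []) + list(numbers)
        return func
    return decorate

@XFail(issue=1234)  # hypothetical issue number
def merge_example(sbox):
    "a test known to fail until the issue is fixed"
    pass

@Issue(5678)  # hypothetical issue number; the test itself PASSes
def checkout_example(sbox):
    "a passing test tied to an issue, for the record"
    pass

# test_list then just lists the bare functions, with no wrappers:
test_list = [None, merge_example, checkout_example]

The harness would check the attributes on the function instead of the
wrapper class in test_list, and requiring the issue parameter falls
out naturally from the decorator's signature.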
-Hyrum