On 7/14/07, Lieven Govaerts <firstname.lastname@example.org> wrote:
> Mark Phippard wrote:
> > Finally, I am not proposing any kind of high-ceremony process or
> > document here. Just a place where people can record things that
> > either they want to get done themselves or that need to be done
> > before we can release. The issue with the fsfs transaction names is
> > a good recent example.
> +1 on your proposal. I personally favor using the issue tracker as the
> TODO list: it's already there, and it allows us to assign issues and set
> milestones.
> There's one thing I want to add: we also have another source of TODO
> items and that's the list of XFail-ing tests in the test suite. Not only
> do they indicate certain issues or missing features, the fact that
> someone took the time to write those tests shows a genuine interest in
> getting the issue fixed or the feature implemented.
> We typically use our test suite to test for regressions: we write tests
> while developing a feature and commit them both at the same time. Some
> people already started writing acceptance tests, which I'd like to see
> happen more often. What's the difference between an acceptance and a
> regression test? Simple: you write an acceptance test upfront before
> implementing the feature or fix. It indicates how you want svn to behave
> with the new feature in place.
> The big advantages of having XFail-ing acceptance tests are:
> - we can add a list of acceptance tests to our TODO list for each
> feature, which we consider the minimum set of functionality required
> for release. This way we always know exactly what needs to be done.
> - having them allows more people to work on a feature, as the expected
> behavior is clearly defined.
> - once they're written and the feature is implemented, they
> automatically act as regression tests.
> The one big disadvantage is that it's very difficult to get all the
> details of the tests exactly right upfront. While I think that's a
> problem in itself, adding a comment that a test is indicative should be
> sufficient to remove this disadvantage.
> I want to propose - as a best practice, not as a strict project
> requirement - that we add tests or placeholders for those features or
> issues we want to get fixed in 1.5 and include a reference to those
> tests in the issue description. I know we already have them for the
> '--depth' feature; it'd be good to know which of them Karl considers
> mandatory for the 1.5 release.
It seems like XFAILs that are going to live in the code for more than
a few weeks ought to have an issue associated with them right in the
text description. That would make it easy to have an extended
discussion around the test and see when it is planned to be fixed, or
what is blocking it.
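To illustrate the idea, here is a minimal, self-contained Python sketch of expected-failure tests that carry an issue reference in their metadata. This is not Subversion's actual svntest framework; the decorator, the test name, and the issue number are all hypothetical, purely to show how an XFail marker can point back at the tracker discussion.

```python
# Sketch of XFail tests annotated with a tracker issue (hypothetical API,
# not Subversion's svntest framework).

def XFail(issue=None):
    """Mark a test as expected to fail, recording the tracker issue."""
    def decorate(func):
        func.xfail = True
        func.issue = issue
        return func
    return decorate

@XFail(issue=2872)  # hypothetical issue number, for illustration only
def copy_preserves_mergeinfo():
    "copy should carry merge history to the destination"
    raise NotImplementedError("feature not implemented yet")

def run_test(func):
    """Run one test; a raising XFail test counts as an expected failure."""
    try:
        func()
    except Exception:
        if getattr(func, "xfail", False):
            return "XFAIL (issue #%s)" % func.issue
        return "FAIL"
    if getattr(func, "xfail", False):
        return "XPASS"  # unexpectedly passed -- worth investigating
    return "PASS"
```

With this scheme, `run_test(copy_preserves_mergeinfo)` reports `XFAIL (issue #2872)`, so anyone scanning the test run (or the source) can jump straight to the issue, and an XPASS, like the APR 1.2.8 case below, stands out immediately.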
I know for example there are two copy tests that XPASS on Windows if
you build with APR 1.2.8. It was nice that those tests referenced
this in the description, so that I did not waste a lot of time hunting
for the cause.
Received on Sat Jul 14 15:45:15 2007