On Thu, Jun 13, 2013 at 3:20 PM, Markus Schaber <m.schaber_at_codesys.com> wrote:
> These are two alternative proposals for the test suite:
>
> Rationale: Developers refrain from implementing some tests because they simply take too much time to run at every commit.
I don't think that's a problem with Subversion. I can run the full
test suite against a single fs layer + ra layer in 5 minutes.
Depending on what I'm touching I may decide to run one or more but
even if I test all 3 ra layers that's only 15 minutes.
I don't recall anyone ever saying "I didn't write a test for that
because it would take too long to run." However, I can certainly say
people have avoided writing tests in this project because the tests
would take too long to write (especially our C level tests vs cmdline
tests). I can also say that people have avoided writing tests because
our test harness for the server side doesn't support changing the
server configuration per test.
> Other test systems like JUnit, NUnit or the CODESYS Test Manager come with ways to select unit tests by category, so we could implement something similar with our tests.
>
> 1) Just two test sets: simple and extended tests:
> Tests which take a lot of time and cover areas which are unlikely to break can be marked as extended tests. For Python, there will be an @extended decorator, for the C tests we'll have macros like SVN_TEST_EXTENDED_PASS.
>
> Then running the tests with a command-line option --simple will skip those tests. Explicitly mentioning extended tests by number will still run them, and this can be combined with the --simple flag.
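A minimal Python sketch of how the proposed @extended decorator and --simple selection might fit together. All names here (extended, select_tests, the explicit parameter) are illustrative, not actual Subversion test-harness APIs:

```python
# Hypothetical sketch of the proposed @extended marking scheme.
# None of these names exist in the real Subversion test harness.

def extended(test_func):
    """Mark a test as 'extended' (slow, covering areas unlikely to break)."""
    test_func.extended = True
    return test_func

@extended
def long_running_test():
    pass  # an expensive test body would go here

def plain_test():
    pass

def select_tests(tests, simple=False, explicit=()):
    """With --simple, skip extended tests unless named explicitly by number."""
    selected = []
    for number, test in enumerate(tests, start=1):
        is_extended = getattr(test, 'extended', False)
        if simple and is_extended and number not in explicit:
            continue
        selected.append(test)
    return selected

tests = [plain_test, long_running_test]
print([t.__name__ for t in select_tests(tests, simple=True)])
# -> ['plain_test']
print([t.__name__ for t in select_tests(tests, simple=True, explicit=(2,))])
# -> ['plain_test', 'long_running_test']
```

The point of the explicit-number escape hatch is that naming an extended test on the command line overrides --simple, so a developer can still run just the slow test relevant to their change.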
I really don't see a reason to do this.
> Continuous integration systems, and the tests run before signing a release, should still execute the full test suite.
>
> But before a small, non-destabilizing commit (use common sense here), only running the non-extended tests is mandatory (and maybe the extended tests covering that specific area).
There is absolutely no way to enforce a test run in this project. So
the entire concept of a mandatory test run before committing is
pointless. What tests a developer runs is ALWAYS going to be a matter
of the developer using their own judgement. For the most part I don't
see too many broken things being committed even with our current
test suite situation. When broken things are committed it's usually
because the developer didn't realize their change was impacted by ra
or fs differences, and the only thing that would have prevented that
would have been more testing, not less.
> For make check, it would be a SIMPLE=true variable.
>
> Alternatively:
> 2) Test Categories:
> A set of categories is defined in a central place.
>
> Examples for such categories could be:
> - Smoke: Smoke tests, covering only the most important functionality.
> - Fsfs: Tests covering only the FSFS specific code.
> - Offline: Tests which do not contact the server.
> - Repository: Tests which cover the repository without a client involved (e.g., svnadmin)
>
> Each test is then attributed with the categories that apply to it.
>
> When running the tests, one could pass a parameter to run only the tests which are attributed with at least one of the given categories. For example, if you changed something in FSFS, "--categories=Smoke,Fsfs" would run the smoke tests and the FSFS tests. A second "--exclude=Repository,5,7" switch could be used to exclude test categories as well as single tests by number.
>
> For make check, we'd have CATEGORIES and EXCLUDE variables.
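A small Python sketch of the category-based selection described above, assuming a hypothetical categories decorator; the selection semantics (include means "at least one matching category", exclude accepts both category names and test numbers) follow the proposal, but none of these names are real harness APIs:

```python
# Hypothetical sketch of category attribution and --categories/--exclude
# selection. Decorator and function names are illustrative only.

def categories(*cats):
    """Attach a set of category names to a test function."""
    def decorate(test_func):
        test_func.categories = frozenset(cats)
        return test_func
    return decorate

@categories('Smoke', 'Fsfs')
def fsfs_smoke_test():
    pass

@categories('Repository')
def svnadmin_test():
    pass

def select(tests, include=(), exclude=()):
    """Keep tests with at least one included category, then drop any test
    whose categories or 1-based number appear in the exclude list."""
    excluded_cats = {e for e in exclude if not str(e).isdigit()}
    excluded_nums = {int(e) for e in exclude if str(e).isdigit()}
    selected = []
    for number, test in enumerate(tests, start=1):
        cats = getattr(test, 'categories', frozenset())
        if include and not cats & set(include):
            continue
        if cats & excluded_cats or number in excluded_nums:
            continue
        selected.append(test)
    return selected

tests = [fsfs_smoke_test, svnadmin_test]
print([t.__name__ for t in select(tests, include=('Smoke', 'Fsfs'))])
# -> ['fsfs_smoke_test']
print([t.__name__ for t in select(tests, include=('Smoke', 'Repository'),
                                  exclude=('Repository',))])
# -> ['fsfs_smoke_test']
```

Making exclude win over include keeps the semantics predictable when a test carries several categories.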
I'm more in favor of something like this because right now some
tests don't use an RA layer or even an FS layer (i.e. some C tests).
If you run tests across all FS and RA layers you end up running these
tests multiple times. Granted that most of these tests are relatively
fast, there is still duplication.
Received on 2013-06-13 15:40:17 CEST