Due to controversial discussion and some opposition at the hackathon, and the impression that this issue is not that urgent yet, I am withdrawing this proposal for now.
I'll come back with it as soon as our test suite has grown so big that it warrants selective test execution. :-)
CODESYS® a trademark of 3S-Smart Software Solutions GmbH
Inspiring Automation Solutions
3S-Smart Software Solutions GmbH
Dipl.-Inf. Markus Schaber | Product Development Core Technology
Memminger Str. 151 | 87439 Kempten | Germany
Tel. +49-831-54031-979 | Fax +49-831-54031-50
E-Mail: firstname.lastname@example.org | Web: http://www.codesys.com | CODESYS store: http://store.codesys.com
CODESYS forum: http://forum.codesys.com
Managing Directors: Dipl.Inf. Dieter Hess, Dipl.Inf. Manfred Werner | Trade register: Kempten HRB 6186 | Tax ID No.: DE 167014915
> -----Original Message-----
> From: Markus Schaber [mailto:m.schaber_at_codesys.com]
> Sent: Friday, June 14, 2013 13:53
> To: Subversion Dev (dev_at_subversion.apache.org)
> Subject: RE: Proposal for separating Tests into groups
> Considering Ben's mails and some personal discussions yesterday, I am
> refining variant 2 and dropping variant 1 (which was actually an extremely
> simplified subset of variant 2).
> 1) We define a bunch of test categories.
> - The number of categories should be small and well-arranged. Too many
> categories would just be confusing and unmaintainable.
> - Some suggested categories:
> - Excessive (Tests which excessively test an isolated part of the code
> which is unlikely to break due to unrelated changes, and take a lot of
> time to execute, like the #ifdef-ed test in svn_io right now).
> - Manual (Tests which require manual interaction; currently only 1)
> - WorkingCopy (Tests which require a working copy)
> - NoRepository (Some C tests which do not involve the repository at all)
> - RaSerf, RaFile, RaSvn (Tests checking specific functionality of an RA layer)
> - FsFs, FsBdb (Tests checking specific functionality of an FS backend)
> - Local (Tests intended to check local working copy functionality, only
> accessing a repo as a side effect)
> - Repository (Tests intended to check access to the repository via an RA layer)
> - Server (Tests just covering server-side functionality, no client /
> working copy involved)
> 2) Each test case gets annotated with one or more test categories.
> 3) Selection of tests:
> - When the user does not specify anything, all tests except the ones marked
> "Excessive" and/or "Manual" are selected.
> - When the user explicitly specifies test numbers and/or categories, the
> selection covers all tests which have a given test number, are marked with
> at least one of the given categories, or are not marked with any category at
> all. Using the special category "default" selects the default set; the
> special category "all" selects all tests, including the "Excessive" and
> "Manual" tests.
> - Additionally, the user may specify a list of excluded test numbers and
> categories, which are then excluded from the selection as defined by the
> three cases above.
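The selection rules in points 1)-3) can be sketched as follows. This is a minimal illustration, not actual Subversion test-suite code; the names `select_tests` and `DEFAULT_EXCLUDED` are hypothetical.

```python
# Hypothetical sketch of the proposed selection rules; not actual svn code.
DEFAULT_EXCLUDED = {"excessive", "manual"}

def select_tests(tests, requested=None, excluded=None):
    """tests: list of (number, categories) pairs.

    requested/excluded may mix test numbers and category names.
    """
    requested = {r.lower() if isinstance(r, str) else r for r in (requested or [])}
    excluded = {e.lower() if isinstance(e, str) else e for e in (excluded or [])}

    def in_default(cats):
        # The default set excludes "Excessive" and "Manual" tests.
        return not (cats & DEFAULT_EXCLUDED)

    selected = []
    for number, cats in tests:
        cats = {c.lower() for c in cats}
        if not requested:
            # Nothing specified: run the default set.
            chosen = in_default(cats)
        else:
            chosen = (number in requested
                      or bool(cats & requested)
                      or not cats  # uncategorized tests are always selected
                      or ("default" in requested and in_default(cats))
                      or "all" in requested)
        # Exclusions apply on top of all three selection cases.
        if chosen and number not in excluded and not (cats & excluded):
            selected.append(number)
    return selected
```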
> 4) Calling Syntax:
> For both Python and C test executables, I propose that we simply allow the
> to-be-selected test categories to be mentioned in addition to the test
> numbers. The tests and categories to be excluded are preceded by the
> --exclude option.
> Example derived from Ben's use case:
> foo.py default excessive --exclude NoRepository
> This runs all non-manual tests which actually use an RA layer.
> This can be used for the 2nd or 3rd test run when alternating RA layers, so
> the RA-independent tests are not run twice.
> For make check, those lists are to be passed via the CATEGORIES and EXCLUDE
> variables. (I'm not sure yet whether we should also allow passing single
> tests there; this would need some syntax combining the test executable name
> and number.)
> For the UI, I suggest category names are case-insensitive.
> 5) Implementation details:
> In Python, I'd define a decorator @categories which one can use to assign
> categories per test case. In addition, one can assign per-module default
> categories which apply to all tests in the test_list which don't have
> explicit categories declared. The categories as well as the decorator will
> be defined in main.py.
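A decorator along the lines described above might look like this. This is a sketch under the stated design, not the actual main.py implementation; `DEFAULT_CATEGORIES` and `effective_categories` are illustrative names.

```python
# Hypothetical sketch of the proposed @categories decorator; names are
# illustrative, not actual svn test-suite code.
def categories(*cats):
    """Attach a set of (case-insensitive) category names to a test function."""
    def decorate(func):
        func.categories = {c.lower() for c in cats}
        return func
    return decorate

# Per-module default, applied to tests without an explicit annotation:
DEFAULT_CATEGORIES = {"workingcopy"}

@categories("Excessive", "NoRepository")
def test_huge_io():
    pass

def effective_categories(func):
    """Return a test's explicit categories, or the module default."""
    return getattr(func, "categories", DEFAULT_CATEGORIES)
```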
> For the C tests, I'd define an enum for the test categories using bit flags.
> The svn_test_descriptor_t will gain an additional const field containing
> those flags.
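The bit-flag idea for the C side can be illustrated with Python's `enum.IntFlag` (shown in Python for consistency with the sketches above; the real proposal is a C enum plus a flags field on svn_test_descriptor_t, and the category names here are the suggested ones, not committed API).

```python
# Analogous sketch of the proposed C bit-flag enum, using Python's IntFlag.
from enum import IntFlag, auto

class TestCategory(IntFlag):
    EXCESSIVE = auto()      # 1
    MANUAL = auto()         # 2
    WORKINGCOPY = auto()    # 4
    NOREPOSITORY = auto()   # 8

# A test descriptor would carry an OR-ed combination of flags:
flags = TestCategory.EXCESSIVE | TestCategory.NOREPOSITORY

# Membership is checked with a bitwise AND, as it would be in C:
assert flags & TestCategory.EXCESSIVE
assert not (flags & TestCategory.MANUAL)
```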
> Best regards
> Markus Schaber
> (This email was sent from a mobile device...)
> From: Ben Reser [ben_at_reser.org]
> Sent: Thursday, June 13, 2013 15:39
> To: Markus Schaber
> Cc: Subversion Dev (dev_at_subversion.apache.org)
> Subject: Re: Proposal for separating Tests into groups
> On Thu, Jun 13, 2013 at 3:20 PM, Markus Schaber <m.schaber_at_codesys.com> wrote:
> > These are two alternative proposals for the test suite:
> > Rationale: Developers refrain from implementing some tests because they
> just take too much time to run at every commit.
> I don't think that's a problem with Subversion. I can run the full test
> suite against a single fs layer + ra layer in 5 minutes.
> Depending on what I'm touching I may decide to run one or more but even if I
> test all 3 ra layers that's only 15 minutes.
> I don't recall anyone ever saying I didn't write a test for that because it
> would take too long to run. However, I can certainly say people have avoided
> writing tests in this project because the tests would take too long to write
> (especially our C level tests vs cmdline tests). I can also say that people
> have avoided writing tests because our test harness for the server side
> doesn't support changing the server configuration per test.
> > Other test systems like JUnit, NUnit or the CODESYS Test Manager come with
> ways to select unit tests by category, so we could implement something
> similar with our tests.
> > 1) Just two test sets: simple and extended tests:
> > Tests which take a lot of time and cover areas which are unlikely to break
> can be marked as extended tests. For Python, there will be an @extended
> decorator, for the C tests we'll have macros like SVN_TEST_EXTENDED_PASS.
> > Then running the tests with a command-line option --simple will skip those
> tests. Explicitly mentioning extended tests by number will still run them,
> and can be combined with the --simple flag.
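Variant 1's @extended marker and --simple filtering could be sketched like this. The names `extended` and `should_run` are hypothetical, chosen only to illustrate the described behavior.

```python
# Hypothetical sketch of variant 1: an @extended marker plus --simple filtering.
def extended(func):
    """Mark a test as extended (slow, unlikely to break)."""
    func.extended = True
    return func

@extended
def test_slow_corner_case():
    pass

def should_run(func, number, simple=False, explicit_numbers=()):
    """--simple skips extended tests, unless they are mentioned explicitly
    by number (which overrides the flag, as proposed)."""
    if number in explicit_numbers:
        return True
    return not (simple and getattr(func, "extended", False))
```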
> I really don't see a reason to do this.
> > Continuous integration systems and the tests before signing a release
> should still execute the full test suite.
> > But before a small, non-destabilizing commit (use common sense here),
> > only running the non-extended tests is mandatory (and maybe the
> > extended tests covering that specific area).
> There is absolutely no way to enforce a test run in this project. So the
> entire concept of a mandatory test run before committing is pointless. What
> tests a developer runs is ALWAYS going to be a matter of the developer using
> their own judgement. For the most part I don't see too many broken things
> being committed that are broken even with our current test suite situation.
> When broken things are committed it's usually because the developer didn't
> understand their change was impacted by ra or fs differences and the only
> thing that would have prevented it would have been more testing, not less.
> > For make check, it would be a SIMPLE=true variable.
> > Alternatively:
> > 2) Test Categories:
> > A set of categories is defined in a central place.
> > Examples for such categories could be:
> > - Smoke: For smoke tests, only the most important.
> > - Fsfs: Tests covering only the FSFS specific code.
> > - Offline: Tests which do not contact the server.
> > - Repository: Tests which cover the repository, without a client
> > involved (e.g. svnadmin)
> > Each test then gets attributed with the categories which are valid for
> > this test.
> > When running the tests, one could pass a parameter to run only the tests
> which are attributed with at least one of the given flags. For example, if
> you changed something in FSFS, "--categories=Smoke,Fsfs" would run the smoke
> tests and the FSFS tests. A second "--exclude=Repository,5,7" switch could be
> used to exclude test categories as well as single tests by number.
> > For make check, we'd have a CATEGORIES and EXCLUDE variables.
> I'm more in favor of something like this because right now some tests
> don't use an RA layer or even an FS layer (i.e. some C tests).
> If you run tests across all FS and RA layers you end up running these tests
> multiple times. Granted that most of these tests are relatively fast, there
> is still duplication.
Received on 2013-06-17 16:37:43 CEST